Test Report: Docker_Linux_crio_arm64 22061

1c88f6d23ea396bf85affe6630893acb8f160428:2025-12-11:42722

Failed tests (40/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.59
44 TestAddons/parallel/Registry 14.26
45 TestAddons/parallel/RegistryCreds 0.48
46 TestAddons/parallel/Ingress 144.91
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 5.34
50 TestAddons/parallel/CSI 42.28
51 TestAddons/parallel/Headlamp 3.14
52 TestAddons/parallel/CloudSpanner 5.26
53 TestAddons/parallel/LocalPath 9.4
54 TestAddons/parallel/NvidiaDevicePlugin 6.38
55 TestAddons/parallel/Yakd 6.27
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.32
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.1
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.47
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.48
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.42
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.78
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.19
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.74
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.1
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.41
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.41
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 100.12
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.29
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.27
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.54
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.61
293 TestJSONOutput/pause/Command 2.34
299 TestJSONOutput/unpause/Command 2.31
358 TestKubernetesUpgrade 796.71
384 TestPause/serial/Pause 6.48
455 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.062
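
Most of the addon failures below share one signature: "minikube ... addons disable <addon>" exits with status 11 because minikube's pause check ("sudo runc list -f json") fails on the CRI-O node. The 7200.06 s entry for TestStartStop/group/no-preload/serial/AddonExistsAfterStop also looks like a 2-hour timeout rather than a quick assertion failure. To re-run a single failed test locally, a rough sketch (assumes a minikube source checkout with the arm64 binary already built; the flag names after -args are our assumption and should be checked against test/integration/main_test.go):

    # hypothetical local reproduction of one failed integration test
    go test ./test/integration -v -timeout 30m -run 'TestAddons/parallel/Registry' \
        -args --minikube-start-args='--driver=docker --container-runtime=crio'
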
TestAddons/serial/Volcano (0.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable volcano --alsologtostderr -v=1: exit status 11 (590.184932ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:55:08.636631   11815 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:08.638663   11815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:08.638707   11815 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:08.638730   11815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:08.639046   11815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:08.639423   11815 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:08.639858   11815 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:08.639907   11815 addons.go:622] checking whether the cluster is paused
	I1210 23:55:08.640046   11815 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:08.640110   11815 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:08.647138   11815 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:08.684739   11815 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:08.684796   11815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:08.702075   11815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:08.813613   11815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:08.813752   11815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:08.844785   11815 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:08.844808   11815 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:08.844814   11815 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:08.844818   11815 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:08.844825   11815 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:08.844829   11815 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:08.844833   11815 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:08.844836   11815 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:08.844839   11815 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:08.844845   11815 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:08.844849   11815 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:08.844852   11815 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:08.844855   11815 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:08.844858   11815 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:08.844862   11815 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:08.844867   11815 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:08.844876   11815 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:08.844879   11815 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:08.844882   11815 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:08.844885   11815 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:08.844890   11815 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:08.844893   11815 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:08.844895   11815 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:08.844899   11815 cri.go:89] found id: ""
	I1210 23:55:08.844956   11815 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:08.860051   11815 out.go:203] 
	W1210 23:55:08.862946   11815 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:08.862983   11815 out.go:285] * 
	W1210 23:55:09.131758   11815 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:09.134747   11815 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.59s)
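
The failure is not specific to Volcano: before disabling any addon, minikube checks whether the cluster is paused by listing runc containers over SSH, and on this CRI-O node /run/runc does not exist, so every "addons disable" invocation aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of the two probes seen in the log above (assuming the addons-903947 profile is still up):

    # the crictl probe succeeds and lists the kube-system containers:
    out/minikube-linux-arm64 -p addons-903947 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the follow-up runc probe is the one that exits 1 and fails the check:
    out/minikube-linux-arm64 -p addons-903947 ssh -- sudo runc list -f json
    #   => open /run/runc: no such file or directory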

TestAddons/parallel/Registry (14.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 13.609608ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003825527s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003356411s
addons_test.go:394: (dbg) Run:  kubectl --context addons-903947 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-903947 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-903947 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.744203141s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 ip
2025/12/10 23:55:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable registry --alsologtostderr -v=1: exit status 11 (252.028768ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:55:33.535166   12734 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:33.535372   12734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:33.535384   12734 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:33.535390   12734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:33.535630   12734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:33.535891   12734 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:33.536273   12734 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:33.536303   12734 addons.go:622] checking whether the cluster is paused
	I1210 23:55:33.536417   12734 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:33.536433   12734 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:33.536922   12734 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:33.554375   12734 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:33.554439   12734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:33.573169   12734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:33.681514   12734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:33.681600   12734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:33.709179   12734 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:33.709203   12734 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:33.709209   12734 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:33.709224   12734 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:33.709228   12734 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:33.709255   12734 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:33.709259   12734 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:33.709262   12734 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:33.709265   12734 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:33.709272   12734 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:33.709277   12734 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:33.709281   12734 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:33.709284   12734 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:33.709291   12734 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:33.709299   12734 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:33.709304   12734 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:33.709307   12734 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:33.709321   12734 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:33.709327   12734 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:33.709330   12734 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:33.709336   12734 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:33.709342   12734 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:33.709345   12734 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:33.709348   12734 cri.go:89] found id: ""
	I1210 23:55:33.709405   12734 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:33.724410   12734 out.go:203] 
	W1210 23:55:33.727534   12734 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:33.727560   12734 out.go:285] * 
	W1210 23:55:33.731895   12734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:33.734914   12734 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.26s)
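
Note the registry itself was healthy: both pods became Ready, the in-cluster wget probe succeeded, and the debug GET against 192.168.49.2:5000 went through; only the trailing "addons disable registry" cleanup hit the same runc pause-check error. A quick manual probe of the registry from the host could look like this (the /v2/ endpoint is part of the standard Docker registry HTTP API, not a call this test makes):

    # probe the registry addon through the node IP reported by "minikube ip"
    curl -i http://192.168.49.2:5000/v2/
    # HTTP 200 with an empty JSON body means the registry is serving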

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.242153ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-903947
addons_test.go:334: (dbg) Run:  kubectl --context addons-903947 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (264.317009ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:56:30.951394   14300 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:56:30.951552   14300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:30.951562   14300 out.go:374] Setting ErrFile to fd 2...
	I1210 23:56:30.951568   14300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:30.951837   14300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:56:30.952144   14300 mustload.go:66] Loading cluster: addons-903947
	I1210 23:56:30.952522   14300 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:30.952544   14300 addons.go:622] checking whether the cluster is paused
	I1210 23:56:30.952655   14300 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:30.952670   14300 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:56:30.953178   14300 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:56:30.970035   14300 ssh_runner.go:195] Run: systemctl --version
	I1210 23:56:30.970095   14300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:56:30.988294   14300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:56:31.101613   14300 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:56:31.101697   14300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:56:31.136397   14300 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:56:31.136420   14300 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:56:31.136427   14300 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:56:31.136431   14300 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:56:31.136434   14300 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:56:31.136438   14300 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:56:31.136441   14300 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:56:31.136444   14300 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:56:31.136447   14300 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:56:31.136458   14300 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:56:31.136462   14300 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:56:31.136465   14300 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:56:31.136468   14300 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:56:31.136471   14300 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:56:31.136473   14300 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:56:31.136482   14300 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:56:31.136488   14300 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:56:31.136495   14300 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:56:31.136498   14300 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:56:31.136501   14300 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:56:31.136505   14300 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:56:31.136508   14300 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:56:31.136511   14300 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:56:31.136514   14300 cri.go:89] found id: ""
	I1210 23:56:31.136565   14300 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:56:31.152015   14300 out.go:203] 
	W1210 23:56:31.154863   14300 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:56:31.154893   14300 out.go:285] * 
	W1210 23:56:31.159249   14300 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:56:31.162162   14300 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (144.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-903947 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-903947 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-903947 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [508e1395-9f79-4b09-94f2-ad131810e174] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [508e1395-9f79-4b09-94f2-ad131810e174] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003766858s
I1210 23:55:55.554678    4875 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.953851026s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-903947 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
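
Exit status 28 from the curl above is curl's operation-timeout code, so the SSH session itself worked but nginx never answered through the ingress within the roughly two minutes the test allowed. One way to confirm by hand with a short explicit cap (-m is standard curl; everything else mirrors the failing command):

    # re-run the failing probe with a 10-second timeout instead of letting it hang
    out/minikube-linux-arm64 -p addons-903947 ssh -- curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/
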
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-903947
helpers_test.go:244: (dbg) docker inspect addons-903947:

-- stdout --
	[
	    {
	        "Id": "2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b",
	        "Created": "2025-12-10T23:52:43.809539023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 6270,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:52:43.874839573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/hosts",
	        "LogPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b-json.log",
	        "Name": "/addons-903947",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-903947:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-903947",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b",
	                "LowerDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-903947",
	                "Source": "/var/lib/docker/volumes/addons-903947/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-903947",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-903947",
	                "name.minikube.sigs.k8s.io": "addons-903947",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "958601d874b510856d61987408247d7176bcb83ca551675ae37eecc1197cff2c",
	            "SandboxKey": "/var/run/docker/netns/958601d874b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-903947": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:9b:6c:a9:93:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3cd589a87a23a69daa73698c71df3ec112b465d3a6d200d824f818ffb9afcf6a",
	                    "EndpointID": "660a3415a965e159c03e3332115143bcb326eb7a16dccb97694d8a88b14d043d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-903947",
	                        "2f5b93e82992"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
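
The inspect output above is also where the harness derives its SSH endpoint: the 22/tcp mapping to 127.0.0.1:32768 matches the sshutil lines earlier in this report. The same Go template the logs show can be run standalone; only the shell quoting differs from the logged argv:

    # extract the host port mapped to the node's SSH port, as the test runner does
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-903947
    # prints 32768 for this container
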
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-903947 -n addons-903947
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-903947 logs -n 25: (1.488267188s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME
	delete │ -p download-docker-685635 │ download-docker-685635 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC
	start │ --download-only -p binary-mirror-844379 --alsologtostderr --binary-mirror http://127.0.0.1:33593 --driver=docker --container-runtime=crio │ binary-mirror-844379 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │
	delete │ -p binary-mirror-844379 │ binary-mirror-844379 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC
	addons │ enable dashboard -p addons-903947 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │
	addons │ disable dashboard -p addons-903947 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │
	start │ -p addons-903947 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:55 UTC
	addons │ addons-903947 addons disable volcano --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable gcp-auth --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ enable headlamp -p addons-903947 --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable headlamp --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable yakd --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	ip │ addons-903947 ip │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │ 10 Dec 25 23:55 UTC
	addons │ addons-903947 addons disable registry --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	ssh │ addons-903947 ssh cat /opt/local-path-provisioner/pvc-45afea10-2a67-458f-9aae-5cd553cc1102_default_test-pvc/file1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │ 10 Dec 25 23:55 UTC
	addons │ addons-903947 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable metrics-server --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	ssh │ addons-903947 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │
	addons │ addons-903947 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:56 UTC │
	addons │ addons-903947 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:56 UTC │
	addons │ addons-903947 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:56 UTC │
	addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-903947 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:56 UTC │ 10 Dec 25 23:56 UTC
	addons │ addons-903947 addons disable registry-creds --alsologtostderr -v=1 │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:56 UTC │
	ip │ addons-903947 ip │ addons-903947 │ jenkins │ v1.37.0 │ 10 Dec 25 23:58 UTC │ 10 Dec 25 23:58 UTC
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:52:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:52:19.171327    5874 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:52:19.171555    5874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:52:19.171566    5874 out.go:374] Setting ErrFile to fd 2...
	I1210 23:52:19.171571    5874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:52:19.171874    5874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:52:19.172385    5874 out.go:368] Setting JSON to false
	I1210 23:52:19.173165    5874 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":226,"bootTime":1765410514,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 23:52:19.173233    5874 start.go:143] virtualization:  
	I1210 23:52:19.176675    5874 out.go:179] * [addons-903947] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 23:52:19.180376    5874 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:52:19.180448    5874 notify.go:221] Checking for updates...
	I1210 23:52:19.186181    5874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:52:19.189115    5874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:52:19.192049    5874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1210 23:52:19.194924    5874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 23:52:19.197881    5874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:52:19.201064    5874 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:52:19.223142    5874 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 23:52:19.223254    5874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:52:19.285912    5874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:52:19.276692644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:52:19.286014    5874 docker.go:319] overlay module found
	I1210 23:52:19.289042    5874 out.go:179] * Using the docker driver based on user configuration
	I1210 23:52:19.291816    5874 start.go:309] selected driver: docker
	I1210 23:52:19.291834    5874 start.go:927] validating driver "docker" against <nil>
	I1210 23:52:19.291857    5874 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:52:19.292626    5874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:52:19.350419    5874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:52:19.341133695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:52:19.350568    5874 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:52:19.350811    5874 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:52:19.353825    5874 out.go:179] * Using Docker driver with root privileges
	I1210 23:52:19.356612    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:52:19.356677    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:52:19.356698    5874 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:52:19.356775    5874 start.go:353] cluster config:
	{Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:52:19.359940    5874 out.go:179] * Starting "addons-903947" primary control-plane node in "addons-903947" cluster
	I1210 23:52:19.362842    5874 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:52:19.365756    5874 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:52:19.368532    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:19.368576    5874 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 23:52:19.368588    5874 cache.go:65] Caching tarball of preloaded images
	I1210 23:52:19.368621    5874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:52:19.368670    5874 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 23:52:19.368681    5874 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:52:19.369019    5874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json ...
	I1210 23:52:19.369047    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json: {Name:mk735da483e0335fcfbe279682e15fa9f8c8dbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:19.384761    5874 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:52:19.384876    5874 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 23:52:19.384895    5874 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 23:52:19.384899    5874 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 23:52:19.384907    5874 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 23:52:19.384911    5874 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1210 23:52:37.081561    5874 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 23:52:37.081597    5874 cache.go:243] Successfully downloaded all kic artifacts
	I1210 23:52:37.081636    5874 start.go:360] acquireMachinesLock for addons-903947: {Name:mk0f48a093bb9740038890b789e7cac9483bde49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:52:37.081764    5874 start.go:364] duration metric: took 105.622µs to acquireMachinesLock for "addons-903947"
	I1210 23:52:37.081789    5874 start.go:93] Provisioning new machine with config: &{Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:52:37.081865    5874 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:52:37.085242    5874 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 23:52:37.085477    5874 start.go:159] libmachine.API.Create for "addons-903947" (driver="docker")
	I1210 23:52:37.085510    5874 client.go:173] LocalClient.Create starting
	I1210 23:52:37.085621    5874 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem
	I1210 23:52:37.137587    5874 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem
	I1210 23:52:37.237801    5874 cli_runner.go:164] Run: docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:52:37.253431    5874 cli_runner.go:211] docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:52:37.253528    5874 network_create.go:284] running [docker network inspect addons-903947] to gather additional debugging logs...
	I1210 23:52:37.253552    5874 cli_runner.go:164] Run: docker network inspect addons-903947
	W1210 23:52:37.269123    5874 cli_runner.go:211] docker network inspect addons-903947 returned with exit code 1
	I1210 23:52:37.269153    5874 network_create.go:287] error running [docker network inspect addons-903947]: docker network inspect addons-903947: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-903947 not found
	I1210 23:52:37.269167    5874 network_create.go:289] output of [docker network inspect addons-903947]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-903947 not found
	
	** /stderr **
	I1210 23:52:37.269262    5874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:52:37.286360    5874 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30c90}
	I1210 23:52:37.286403    5874 network_create.go:124] attempt to create docker network addons-903947 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 23:52:37.286461    5874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-903947 addons-903947
	I1210 23:52:37.347995    5874 network_create.go:108] docker network addons-903947 192.168.49.0/24 created
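	The network created above is an ordinary user-defined Docker bridge. As a sketch (profile name, subnet, and gateway taken from the logged command; the minor -o flags are omitted here), it can be inspected or reproduced by hand:
	
	    # Show the subnet/gateway minikube picked for this profile's network
	    docker network inspect addons-903947 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	    # Manual equivalent of the logged "docker network create" invocation
	    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	      -o com.docker.network.driver.mtu=1500 addons-903947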
	I1210 23:52:37.348035    5874 kic.go:121] calculated static IP "192.168.49.2" for the "addons-903947" container
	I1210 23:52:37.348108    5874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:52:37.363340    5874 cli_runner.go:164] Run: docker volume create addons-903947 --label name.minikube.sigs.k8s.io=addons-903947 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:52:37.380912    5874 oci.go:103] Successfully created a docker volume addons-903947
	I1210 23:52:37.380993    5874 cli_runner.go:164] Run: docker run --rm --name addons-903947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --entrypoint /usr/bin/test -v addons-903947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:52:39.781978    5874 cli_runner.go:217] Completed: docker run --rm --name addons-903947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --entrypoint /usr/bin/test -v addons-903947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.400944771s)
	I1210 23:52:39.782025    5874 oci.go:107] Successfully prepared a docker volume addons-903947
	I1210 23:52:39.782078    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:39.782094    5874 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:52:39.782158    5874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-903947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:52:43.750235    5874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-903947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.968039797s)
	I1210 23:52:43.750266    5874 kic.go:203] duration metric: took 3.968169609s to extract preloaded images to volume ...
	W1210 23:52:43.750418    5874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 23:52:43.750531    5874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:52:43.795767    5874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-903947 --name addons-903947 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-903947 --network addons-903947 --ip 192.168.49.2 --volume addons-903947:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:52:44.100768    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Running}}
	I1210 23:52:44.123621    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.150902    5874 cli_runner.go:164] Run: docker exec addons-903947 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:52:44.216972    5874 oci.go:144] the created container "addons-903947" has a running status.
	I1210 23:52:44.216999    5874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa...
	I1210 23:52:44.713849    5874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:52:44.743012    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.777439    5874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:52:44.777460    5874 kic_runner.go:114] Args: [docker exec --privileged addons-903947 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:52:44.847314    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.869522    5874 machine.go:94] provisionDockerMachine start ...
	I1210 23:52:44.869632    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:44.890497    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:44.890904    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:44.890918    5874 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:52:45.131580    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-903947
	
	I1210 23:52:45.131608    5874 ubuntu.go:182] provisioning hostname "addons-903947"
	I1210 23:52:45.131696    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.170220    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:45.170571    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:45.170583    5874 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-903947 && echo "addons-903947" | sudo tee /etc/hostname
	I1210 23:52:45.380657    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-903947
	
	I1210 23:52:45.380810    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.400919    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:45.401221    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:45.401235    5874 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-903947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-903947/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-903947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:52:45.559167    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:52:45.559192    5874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1210 23:52:45.559210    5874 ubuntu.go:190] setting up certificates
	I1210 23:52:45.559219    5874 provision.go:84] configureAuth start
	I1210 23:52:45.559280    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:45.578029    5874 provision.go:143] copyHostCerts
	I1210 23:52:45.578114    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1210 23:52:45.578244    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1210 23:52:45.578313    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1210 23:52:45.578373    5874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.addons-903947 san=[127.0.0.1 192.168.49.2 addons-903947 localhost minikube]
	I1210 23:52:45.916699    5874 provision.go:177] copyRemoteCerts
	I1210 23:52:45.916763    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:52:45.916816    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.933795    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.039308    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 23:52:46.057595    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 23:52:46.075589    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1210 23:52:46.096027    5874 provision.go:87] duration metric: took 536.784465ms to configureAuth
	I1210 23:52:46.096060    5874 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:52:46.096257    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:52:46.096370    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.113931    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:46.114248    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:46.114271    5874 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:52:46.422480    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
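	The SSH command above drops a sysconfig fragment telling CRI-O to treat the cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the runtime so the option takes effect. A sketch for checking the result once the profile is up:
	
	    minikube -p addons-903947 ssh -- cat /etc/sysconfig/crio.minikube
	    minikube -p addons-903947 ssh -- systemctl is-active crio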
	
	I1210 23:52:46.422547    5874 machine.go:97] duration metric: took 1.553005215s to provisionDockerMachine
	I1210 23:52:46.422576    5874 client.go:176] duration metric: took 9.337058875s to LocalClient.Create
	I1210 23:52:46.422602    5874 start.go:167] duration metric: took 9.33712369s to libmachine.API.Create "addons-903947"
	I1210 23:52:46.422646    5874 start.go:293] postStartSetup for "addons-903947" (driver="docker")
	I1210 23:52:46.422671    5874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:52:46.422783    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:52:46.422893    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.440488    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.542785    5874 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:52:46.545915    5874 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:52:46.545945    5874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:52:46.545957    5874 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1210 23:52:46.546025    5874 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1210 23:52:46.546054    5874 start.go:296] duration metric: took 123.389166ms for postStartSetup
	I1210 23:52:46.546371    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:46.563079    5874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json ...
	I1210 23:52:46.563364    5874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:52:46.563424    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.579790    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.684062    5874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:52:46.688803    5874 start.go:128] duration metric: took 9.606923824s to createHost
	I1210 23:52:46.688831    5874 start.go:83] releasing machines lock for "addons-903947", held for 9.607057722s
	I1210 23:52:46.688925    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:46.706447    5874 ssh_runner.go:195] Run: cat /version.json
	I1210 23:52:46.706499    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.706510    5874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:52:46.706578    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.730523    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.744965    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.932909    5874 ssh_runner.go:195] Run: systemctl --version
	I1210 23:52:46.940326    5874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:52:46.976860    5874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:52:46.981131    5874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:52:46.981229    5874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:52:47.017089    5874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
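	Note that the stock bridge/podman CNI configs are not deleted; they are renamed with a .mk_disabled suffix so kindnet ends up as the only active CNI. A sketch to confirm on the running node:
	
	    minikube -p addons-903947 ssh -- ls /etc/cni/net.d
	    # the bridge/podman conflists should now carry a .mk_disabled suffix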
	I1210 23:52:47.017155    5874 start.go:496] detecting cgroup driver to use...
	I1210 23:52:47.017197    5874 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 23:52:47.017259    5874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:52:47.035334    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:52:47.048494    5874 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:52:47.048592    5874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:52:47.066571    5874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:52:47.085791    5874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:52:47.202792    5874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:52:47.329399    5874 docker.go:234] disabling docker service ...
	I1210 23:52:47.329463    5874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:52:47.350434    5874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:52:47.363054    5874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:52:47.491319    5874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:52:47.609495    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:52:47.621991    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:52:47.636080    5874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:52:47.636192    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.644922    5874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 23:52:47.645046    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.653723    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.662629    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.672291    5874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:52:47.680508    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.689319    5874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.702465    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
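	Taken together, the sed edits above leave the CRI-O drop-in with roughly the following keys (a sketch: the log only shows the rewritten values, so the TOML section headers here are assumed, not taken from the run):
	
	    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]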
	I1210 23:52:47.711280    5874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:52:47.718542    5874 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 23:52:47.718607    5874 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 23:52:47.732522    5874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:52:47.739715    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:52:47.849931    5874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 23:52:48.011832    5874 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:52:48.011943    5874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:52:48.016994    5874 start.go:564] Will wait 60s for crictl version
	I1210 23:52:48.017064    5874 ssh_runner.go:195] Run: which crictl
	I1210 23:52:48.021861    5874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:52:48.059679    5874 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:52:48.059815    5874 ssh_runner.go:195] Run: crio --version
	I1210 23:52:48.087926    5874 ssh_runner.go:195] Run: crio --version
	I1210 23:52:48.124319    5874 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:52:48.127263    5874 cli_runner.go:164] Run: docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:52:48.145181    5874 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 23:52:48.149265    5874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
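	The one-liner above updates /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the fresh gateway mapping, and copies the temp file back over /etc/hosts. A sketch to verify:
	
	    minikube -p addons-903947 ssh -- grep host.minikube.internal /etc/hosts
	    # expected: 192.168.49.1	host.minikube.internal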
	I1210 23:52:48.159430    5874 kubeadm.go:884] updating cluster {Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:52:48.159555    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:48.159625    5874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:52:48.195766    5874 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:52:48.195792    5874 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:52:48.195853    5874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:52:48.221110    5874 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:52:48.221135    5874 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:52:48.221145    5874 kubeadm.go:935] updating node { 192.168.49.2  8443 v1.34.2 crio true true} ...
	I1210 23:52:48.221280    5874 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-903947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 23:52:48.221381    5874 ssh_runner.go:195] Run: crio config
	I1210 23:52:48.305414    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:52:48.305439    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:52:48.305485    5874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:52:48.305516    5874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-903947 NodeName:addons-903947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:52:48.305661    5874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-903947"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
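	The config above is a standard kubeadm v1beta4 document plus KubeletConfiguration and KubeProxyConfiguration sections. As a sketch, it can be linted with the same kubeadm binary once written to disk (the log writes it to /var/tmp/minikube/kubeadm.yaml.new a few lines below):
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new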
	
	I1210 23:52:48.305738    5874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:52:48.313957    5874 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:52:48.314056    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:52:48.322443    5874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 23:52:48.336230    5874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:52:48.349795    5874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1210 23:52:48.363638    5874 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:52:48.367813    5874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:52:48.377828    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:52:48.488017    5874 ssh_runner.go:195] Run: sudo systemctl start kubelet
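	The kubelet unit and the 10-kubeadm.conf drop-in written just above can be read back in one shot (a sketch, run against the live node):
	
	    minikube -p addons-903947 ssh -- systemctl cat kubelet
	    # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in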
	I1210 23:52:48.504798    5874 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947 for IP: 192.168.49.2
	I1210 23:52:48.504869    5874 certs.go:195] generating shared ca certs ...
	I1210 23:52:48.504902    5874 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.505103    5874 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1210 23:52:48.782161    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt ...
	I1210 23:52:48.782194    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt: {Name:mk00facc681767994d91bec52ecd40e1bc33b2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.782386    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key ...
	I1210 23:52:48.782398    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key: {Name:mk64669b159fea61000d44e52eed549edb0ea9c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.782485    5874 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1210 23:52:49.040690    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt ...
	I1210 23:52:49.040722    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt: {Name:mkac3ad3424e7224ff419ab1f6473c38ac84d334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.040903    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key ...
	I1210 23:52:49.040915    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key: {Name:mk5e394e666c47a3926fb73141c0949c6c354e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.041006    5874 certs.go:257] generating profile certs ...
	I1210 23:52:49.041071    5874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key
	I1210 23:52:49.041090    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt with IP's: []
	I1210 23:52:49.436571    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt ...
	I1210 23:52:49.436603    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: {Name:mk2d5db386af54c15b534b90c077692db22543e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.436790    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key ...
	I1210 23:52:49.436805    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key: {Name:mk062f15586ea5f3e20f9e75be1c031b4c125750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.436893    5874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf
	I1210 23:52:49.436916    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 23:52:49.545047    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf ...
	I1210 23:52:49.545079    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf: {Name:mka148db7dac5b0b29e61829b4c3b03dc742039b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.545259    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf ...
	I1210 23:52:49.545273    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf: {Name:mkcbeee4881c4e4210e1bf5857f5e5e22b5464b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.545362    5874 certs.go:382] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt
	I1210 23:52:49.545449    5874 certs.go:386] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key
	I1210 23:52:49.545504    5874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key
	I1210 23:52:49.545525    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt with IP's: []
	I1210 23:52:49.872237    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt ...
	I1210 23:52:49.872269    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt: {Name:mk4ab41a5dc3ab83e488b3a97ff6d97cb4b3baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.872444    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key ...
	I1210 23:52:49.872457    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key: {Name:mk30743ce08c26a9ec595be5daa0df8f1c130c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.872647    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 23:52:49.872691    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1210 23:52:49.872716    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:52:49.872747    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1210 23:52:49.873302    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:52:49.891544    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 23:52:49.909669    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:52:49.928132    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 23:52:49.945094    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:52:49.961677    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:52:49.978650    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:52:49.995594    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:52:50.022483    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:52:50.042350    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:52:50.056207    5874 ssh_runner.go:195] Run: openssl version
	I1210 23:52:50.062643    5874 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.070748    5874 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:52:50.078476    5874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.082396    5874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.082467    5874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.125465    5874 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:52:50.132999    5874 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
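
The hash-and-symlink step exists because OpenSSL looks up trust anchors in /etc/ssl/certs by subject-name hash. The log computes the hash of minikubeCA.pem and links <hash>.0 to it; the same two commands by hand (a sketch):

    # OpenSSL resolves CAs by subject hash; b5213941.0 is that hash for this CA.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
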
	I1210 23:52:50.140314    5874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:52:50.143908    5874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:52:50.143957    5874 kubeadm.go:401] StartCluster: {Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:52:50.144043    5874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:52:50.144103    5874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:52:50.185455    5874 cri.go:89] found id: ""
	I1210 23:52:50.185577    5874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:52:50.195454    5874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:52:50.204154    5874 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:52:50.204220    5874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:52:50.213136    5874 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:52:50.213155    5874 kubeadm.go:158] found existing configuration files:
	
	I1210 23:52:50.213206    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:52:50.221013    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:52:50.221083    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:52:50.228288    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:52:50.235782    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:52:50.235870    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:52:50.243155    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:52:50.250940    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:52:50.251021    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:52:50.258109    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:52:50.265354    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:52:50.265466    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:52:50.272852    5874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:52:50.316765    5874 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:52:50.317132    5874 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:52:50.343525    5874 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:52:50.343709    5874 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 23:52:50.343781    5874 kubeadm.go:319] OS: Linux
	I1210 23:52:50.343859    5874 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:52:50.343937    5874 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 23:52:50.344014    5874 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:52:50.344093    5874 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:52:50.344170    5874 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:52:50.344249    5874 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:52:50.344326    5874 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:52:50.344402    5874 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:52:50.344481    5874 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 23:52:50.407506    5874 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:52:50.407654    5874 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:52:50.407750    5874 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:52:50.416946    5874 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:52:50.423179    5874 out.go:252]   - Generating certificates and keys ...
	I1210 23:52:50.423281    5874 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:52:50.423359    5874 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:52:50.531787    5874 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:52:51.285198    5874 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:52:51.541175    5874 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:52:52.249793    5874 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:52:54.278419    5874 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:52:54.278561    5874 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-903947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 23:52:55.481575    5874 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:52:55.481934    5874 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-903947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 23:52:56.164698    5874 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:52:56.491289    5874 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:52:56.727823    5874 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:52:56.728287    5874 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:52:56.969966    5874 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:52:57.577566    5874 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:52:58.158250    5874 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:52:59.244577    5874 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:52:59.464697    5874 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:52:59.465275    5874 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:52:59.467870    5874 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:52:59.471335    5874 out.go:252]   - Booting up control plane ...
	I1210 23:52:59.471433    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:52:59.471510    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:52:59.471577    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:52:59.486123    5874 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:52:59.486435    5874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:52:59.496279    5874 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:52:59.496679    5874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:52:59.496884    5874 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:52:59.629115    5874 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:52:59.629234    5874 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:53:00.631315    5874 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00123898s
	I1210 23:53:00.633901    5874 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:53:00.633992    5874 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 23:53:00.634082    5874 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:53:00.634169    5874 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:53:04.138739    5874 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.504313587s
	I1210 23:53:04.763078    5874 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.129092459s
	I1210 23:53:06.635865    5874 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001720356s
	I1210 23:53:06.670334    5874 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:53:06.689297    5874 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:53:06.707629    5874 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:53:06.707851    5874 kubeadm.go:319] [mark-control-plane] Marking the node addons-903947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:53:06.724791    5874 kubeadm.go:319] [bootstrap-token] Using token: uj2agy.orhyfdtqxpvj5c65
	I1210 23:53:06.727780    5874 out.go:252]   - Configuring RBAC rules ...
	I1210 23:53:06.727920    5874 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:53:06.742259    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:53:06.751958    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:53:06.756255    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:53:06.762800    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:53:06.769210    5874 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:53:07.042944    5874 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:53:07.509699    5874 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:53:08.042580    5874 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:53:08.043776    5874 kubeadm.go:319] 
	I1210 23:53:08.043860    5874 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:53:08.043870    5874 kubeadm.go:319] 
	I1210 23:53:08.043947    5874 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:53:08.043963    5874 kubeadm.go:319] 
	I1210 23:53:08.043989    5874 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:53:08.044055    5874 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:53:08.044112    5874 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:53:08.044117    5874 kubeadm.go:319] 
	I1210 23:53:08.044171    5874 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:53:08.044178    5874 kubeadm.go:319] 
	I1210 23:53:08.044226    5874 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:53:08.044234    5874 kubeadm.go:319] 
	I1210 23:53:08.044286    5874 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:53:08.044363    5874 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:53:08.044435    5874 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:53:08.044443    5874 kubeadm.go:319] 
	I1210 23:53:08.044528    5874 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:53:08.044608    5874 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:53:08.044616    5874 kubeadm.go:319] 
	I1210 23:53:08.044700    5874 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uj2agy.orhyfdtqxpvj5c65 \
	I1210 23:53:08.044808    5874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:695d64cb3b1088e1978dc6911e99e852648cf50ad98520ca6a673e7aef325366 \
	I1210 23:53:08.044832    5874 kubeadm.go:319] 	--control-plane 
	I1210 23:53:08.044839    5874 kubeadm.go:319] 
	I1210 23:53:08.044924    5874 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:53:08.044932    5874 kubeadm.go:319] 
	I1210 23:53:08.045014    5874 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uj2agy.orhyfdtqxpvj5c65 \
	I1210 23:53:08.045120    5874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:695d64cb3b1088e1978dc6911e99e852648cf50ad98520ca6a673e7aef325366 
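
The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If the join command is lost, it can be recomputed from ca.crt with the standard kubeadm recipe (path per the cert locations logged earlier):

    # Recompute the discovery hash; kubeadm expects it prefixed with "sha256:".
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
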
	I1210 23:53:08.049306    5874 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 23:53:08.049521    5874 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 23:53:08.049625    5874 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
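
These warnings are benign here: cgroups v1 is a property of the 5.15 AWS host kernel, the "configs" kernel module simply isn't shipped for it, and minikube deliberately starts kubelet itself (systemctl start at 23:52:48 above) without enabling the unit. On a self-managed node the last warning would be addressed with:

    # Make kubelet survive reboots (not needed inside the minikube container):
    sudo systemctl enable --now kubelet.service
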
	I1210 23:53:08.049654    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:53:08.049666    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:53:08.052819    5874 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:53:08.055756    5874 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:53:08.060529    5874 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:53:08.060550    5874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:53:08.075357    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
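
With the docker driver and crio runtime, minikube selects kindnet as the CNI and applies its manifest with the pinned kubectl. A quick health check after the apply (a sketch; the app=kindnet label is assumed from the kindnet manifest):

    # Watch the kindnet DaemonSet pods come up on the node.
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -n kube-system -l app=kindnet -o wide
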
	I1210 23:53:08.360154    5874 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:53:08.360292    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:08.360364    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-903947 minikube.k8s.io/updated_at=2025_12_10T23_53_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-903947 minikube.k8s.io/primary=true
	I1210 23:53:08.506428    5874 ops.go:34] apiserver oom_adj: -16
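
The oom_adj: -16 line is a sanity check that the apiserver is shielded from the kernel OOM killer (negative values make a process a less likely victim); minikube reads it straight from procfs, as the earlier Run line shows:

    # Same probe as the log line above; -16 means "avoid killing this process".
    cat /proc/$(pgrep kube-apiserver)/oom_adj
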
	I1210 23:53:08.506539    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:09.007935    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:09.506696    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:10.007501    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:10.506700    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:11.006722    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:11.506729    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:12.010941    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:12.506998    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:13.008184    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:13.097545    5874 kubeadm.go:1114] duration metric: took 4.737294928s to wait for elevateKubeSystemPrivileges
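
The burst of `kubectl get sa default` calls above is a poll loop: minikube waits, at roughly 500ms intervals, for the controller-manager to create the default ServiceAccount before granting kube-system privileges. The shell equivalent of that wait (a sketch):

    # Block until the "default" ServiceAccount exists in the default namespace.
    K="sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    until $K get sa default >/dev/null 2>&1; do sleep 0.5; done
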
	I1210 23:53:13.097576    5874 kubeadm.go:403] duration metric: took 22.953620046s to StartCluster
	I1210 23:53:13.097593    5874 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:53:13.097699    5874 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:53:13.098118    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:53:13.098320    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:53:13.098328    5874 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:53:13.098592    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:53:13.098635    5874 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
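
The toEnable map is the effective addon set for this profile; each true entry gets a matching "Setting addon" pair below. The same switches are exposed on the CLI, e.g.:

    # Inspect and toggle addons for this profile (same binary as the test run).
    out/minikube-linux-arm64 -p addons-903947 addons list
    out/minikube-linux-arm64 -p addons-903947 addons enable metrics-server
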
	I1210 23:53:13.098722    5874 addons.go:70] Setting yakd=true in profile "addons-903947"
	I1210 23:53:13.098735    5874 addons.go:239] Setting addon yakd=true in "addons-903947"
	I1210 23:53:13.098756    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.099262    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.099399    5874 addons.go:70] Setting inspektor-gadget=true in profile "addons-903947"
	I1210 23:53:13.099418    5874 addons.go:239] Setting addon inspektor-gadget=true in "addons-903947"
	I1210 23:53:13.099448    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.099880    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.100352    5874 addons.go:70] Setting metrics-server=true in profile "addons-903947"
	I1210 23:53:13.100380    5874 addons.go:239] Setting addon metrics-server=true in "addons-903947"
	I1210 23:53:13.100439    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.100880    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.103123    5874 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-903947"
	I1210 23:53:13.103155    5874 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-903947"
	I1210 23:53:13.103185    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.103676    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.106876    5874 addons.go:70] Setting registry=true in profile "addons-903947"
	I1210 23:53:13.106903    5874 addons.go:239] Setting addon registry=true in "addons-903947"
	I1210 23:53:13.106941    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.107416    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.110108    5874 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-903947"
	I1210 23:53:13.110188    5874 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-903947"
	I1210 23:53:13.110250    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.110785    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122380    5874 addons.go:70] Setting cloud-spanner=true in profile "addons-903947"
	I1210 23:53:13.122463    5874 addons.go:239] Setting addon cloud-spanner=true in "addons-903947"
	I1210 23:53:13.122511    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.123155    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.130658    5874 out.go:179] * Verifying Kubernetes components...
	I1210 23:53:13.140161    5874 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-903947"
	I1210 23:53:13.140509    5874 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-903947"
	I1210 23:53:13.142287    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.144086    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140348    5874 addons.go:70] Setting default-storageclass=true in profile "addons-903947"
	I1210 23:53:13.144420    5874 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-903947"
	I1210 23:53:13.151533    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122399    5874 addons.go:70] Setting storage-provisioner=true in profile "addons-903947"
	I1210 23:53:13.154140    5874 addons.go:239] Setting addon storage-provisioner=true in "addons-903947"
	I1210 23:53:13.154231    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.154901    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.159075    5874 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 23:53:13.122408    5874 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-903947"
	I1210 23:53:13.159399    5874 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-903947"
	I1210 23:53:13.159727    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.172593    5874 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 23:53:13.172666    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 23:53:13.172770    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.122420    5874 addons.go:70] Setting volcano=true in profile "addons-903947"
	I1210 23:53:13.177779    5874 addons.go:239] Setting addon volcano=true in "addons-903947"
	I1210 23:53:13.177826    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.178281    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122427    5874 addons.go:70] Setting volumesnapshots=true in profile "addons-903947"
	I1210 23:53:13.188463    5874 addons.go:239] Setting addon volumesnapshots=true in "addons-903947"
	I1210 23:53:13.188596    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.197937    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140357    5874 addons.go:70] Setting gcp-auth=true in profile "addons-903947"
	I1210 23:53:13.219082    5874 mustload.go:66] Loading cluster: addons-903947
	I1210 23:53:13.219321    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:53:13.219607    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140361    5874 addons.go:70] Setting ingress=true in profile "addons-903947"
	I1210 23:53:13.234736    5874 addons.go:239] Setting addon ingress=true in "addons-903947"
	I1210 23:53:13.234792    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.235579    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140364    5874 addons.go:70] Setting ingress-dns=true in profile "addons-903947"
	I1210 23:53:13.249274    5874 addons.go:239] Setting addon ingress-dns=true in "addons-903947"
	I1210 23:53:13.249341    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.249871    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.295442    5874 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 23:53:13.122379    5874 addons.go:70] Setting registry-creds=true in profile "addons-903947"
	I1210 23:53:13.296431    5874 addons.go:239] Setting addon registry-creds=true in "addons-903947"
	I1210 23:53:13.296486    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.297209    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.302719    5874 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 23:53:13.302750    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 23:53:13.302818    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.140674    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:53:13.323114    5874 addons.go:239] Setting addon default-storageclass=true in "addons-903947"
	I1210 23:53:13.323151    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.323576    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.333501    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1210 23:53:13.333839    5874 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 23:53:13.338041    5874 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 23:53:13.338328    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.340886    5874 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 23:53:13.341001    5874 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 23:53:13.343006    5874 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 23:53:13.343160    5874 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:53:13.343315    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 23:53:13.349200    5874 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 23:53:13.349234    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 23:53:13.349327    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.349352    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 23:53:13.349366    5874 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 23:53:13.349412    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.351412    5874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:53:13.354024    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:53:13.354176    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.367537    5874 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 23:53:13.367561    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 23:53:13.367633    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.353974    5874 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 23:53:13.370814    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 23:53:13.370878    5874 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 23:53:13.371003    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.353984    5874 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 23:53:13.403092    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 23:53:13.404300    5874 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 23:53:13.404321    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 23:53:13.404393    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.411980    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.414602    5874 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-903947"
	I1210 23:53:13.414647    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.415215    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.415507    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 23:53:13.419668    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 23:53:13.445130    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 23:53:13.448982    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 23:53:13.451873    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 23:53:13.455223    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 23:53:13.458108    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:13.463018    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 23:53:13.465956    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:13.466075    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 23:53:13.466089    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 23:53:13.466184    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.489753    5874 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 23:53:13.489775    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 23:53:13.489862    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.490157    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 23:53:13.506148    5874 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 23:53:13.506413    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 23:53:13.506427    5874 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 23:53:13.506512    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.513696    5874 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 23:53:13.514002    5874 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 23:53:13.514047    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 23:53:13.514116    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.521528    5874 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 23:53:13.521563    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 23:53:13.521634    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.543598    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.552022    5874 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:53:13.552048    5874 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:53:13.552110    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.595514    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.640723    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.644435    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.651092    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.672442    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.680232    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.720110    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.726258    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.726843    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.737110    5874 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 23:53:13.742038    5874 out.go:179]   - Using image docker.io/busybox:stable
	I1210 23:53:13.743075    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.745407    5874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 23:53:13.745427    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 23:53:13.745489    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.750281    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.761284    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.788774    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.801254    5874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:53:14.110789    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 23:53:14.174478    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 23:53:14.178746    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 23:53:14.178770    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 23:53:14.266552    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 23:53:14.266575    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 23:53:14.290521    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:53:14.293159    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 23:53:14.293182    5874 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 23:53:14.308904    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 23:53:14.314191    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 23:53:14.319593    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 23:53:14.344686    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 23:53:14.344711    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 23:53:14.358097    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 23:53:14.364068    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 23:53:14.368378    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:53:14.405685    5874 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 23:53:14.405719    5874 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 23:53:14.416445    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 23:53:14.416525    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 23:53:14.416828    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 23:53:14.416877    5874 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 23:53:14.425231    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 23:53:14.427477    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 23:53:14.427540    5874 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 23:53:14.485562    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 23:53:14.485632    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 23:53:14.533988    5874 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 23:53:14.534057    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 23:53:14.536289    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 23:53:14.536361    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 23:53:14.575790    5874 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.242256988s)
	I1210 23:53:14.575870    5874 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
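
The sed pipeline completed above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.49.1). Reconstructed from the command itself, the injected fragment looks like this (a sketch; the surrounding Corefile is elided):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
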
	I1210 23:53:14.577167    5874 node_ready.go:35] waiting up to 6m0s for node "addons-903947" to be "Ready" ...
	I1210 23:53:14.584671    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 23:53:14.584743    5874 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 23:53:14.647076    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 23:53:14.647146    5874 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 23:53:14.663790    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 23:53:14.663864    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 23:53:14.666904    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 23:53:14.666985    5874 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 23:53:14.703011    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 23:53:14.803619    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 23:53:14.803691    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 23:53:14.822819    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 23:53:14.822893    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 23:53:14.833514    5874 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:14.833581    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 23:53:14.836184    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 23:53:14.985017    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:15.007940    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 23:53:15.015735    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 23:53:15.015822    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 23:53:15.083968    5874 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-903947" context rescaled to 1 replicas
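
The "rescaled to 1 replicas" line reflects minikube trimming CoreDNS down to a single replica for this one-node cluster; the roughly equivalent manual command would be (a sketch):

    kubectl -n kube-system scale deployment coredns --replicas=1
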
	I1210 23:53:15.262167    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 23:53:15.262239    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 23:53:15.444287    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 23:53:15.444314    5874 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 23:53:15.615295    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 23:53:15.615383    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 23:53:15.900575    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 23:53:15.900647    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 23:53:16.139731    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 23:53:16.139810    5874 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 23:53:16.375722    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 23:53:16.587952    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:17.345465    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.170932567s)
	I1210 23:53:17.687900    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.397342619s)
	W1210 23:53:18.589928    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:19.012542    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.698312518s)
	I1210 23:53:19.012670    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.654546341s)
	I1210 23:53:19.012755    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.648663184s)
	I1210 23:53:19.012800    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.644399449s)
	I1210 23:53:19.013114    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.587822015s)
	I1210 23:53:19.013267    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.310186654s)
	I1210 23:53:19.013288    5874 addons.go:495] Verifying addon registry=true in "addons-903947"
	I1210 23:53:19.013502    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.692985747s)
	I1210 23:53:19.013746    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.177488801s)
	I1210 23:53:19.013762    5874 addons.go:495] Verifying addon metrics-server=true in "addons-903947"
	I1210 23:53:19.013829    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.028733845s)
	I1210 23:53:19.013850    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.005837741s)
	W1210 23:53:19.013857    5874 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 23:53:19.013895    5874 retry.go:31] will retry after 167.822431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
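
The failure and retry above are the usual CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define it, and the REST mapping for the new kind does not exist until the API server has established those CRDs, hence "ensure CRDs are installed first". minikube handles this by retrying (and later reapplying with --force, see the 23:53:19.181993 line below). Done by hand, the race is avoided by applying the CRDs on their own and blocking on their Established condition before applying the custom resources (a sketch, reusing the manifest paths from the log):

    # apply the snapshot CRDs on their own first
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # block until the API server has established the new kinds
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # only now can the VolumeSnapshotClass be mapped and applied
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
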
	I1210 23:53:19.014390    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.705457263s)
	I1210 23:53:19.014417    5874 addons.go:495] Verifying addon ingress=true in "addons-903947"
	I1210 23:53:19.016850    5874 out.go:179] * Verifying registry addon...
	I1210 23:53:19.018724    5874 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-903947 service yakd-dashboard -n yakd-dashboard
	
	I1210 23:53:19.018875    5874 out.go:179] * Verifying ingress addon...
	I1210 23:53:19.022379    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 23:53:19.023375    5874 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 23:53:19.038009    5874 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 23:53:19.038134    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:19.038118    5874 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 23:53:19.038203    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
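
kapi.go polls the labelled pods until they leave Pending. The same readiness gate can be expressed with kubectl wait; note that the plain app.kubernetes.io/name=ingress-nginx selector also matches the one-shot admission jobs counted in the "Found 3 Pods" line above, so restricting to the controller component (an assumption based on the standard ingress-nginx labels) keeps the wait from stalling on completed job pods (a sketch):

    kubectl -n kube-system wait --for=condition=Ready pod \
      --selector=kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      --selector=app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
      --timeout=6m
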
	W1210 23:53:19.044676    5874 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
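
The storage-provisioner-rancher warning above is a plain optimistic-concurrency conflict: the StorageClass changed between minikube's read and its update, so the API server rejected the stale write. A patch carries no resourceVersion and therefore cannot hit this conflict; marking local-path as default by hand would look like (a sketch):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
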
	I1210 23:53:19.181993    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:19.296452    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.920671667s)
	I1210 23:53:19.296491    5874 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-903947"
	I1210 23:53:19.299581    5874 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 23:53:19.304083    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 23:53:19.316891    5874 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 23:53:19.316920    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:19.532243    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:19.532626    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:19.807378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:20.031720    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:20.031939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:20.307916    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:20.526948    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:20.527573    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:20.808057    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.022297    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 23:53:21.022387    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:21.030797    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:21.031685    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:21.041487    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	W1210 23:53:21.085046    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:21.156289    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 23:53:21.173690    5874 addons.go:239] Setting addon gcp-auth=true in "addons-903947"
	I1210 23:53:21.173785    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:21.174244    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:21.191559    5874 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 23:53:21.191614    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:21.208835    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:21.309064    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.527380    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:21.527904    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:21.808769    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.908302    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726264863s)
	I1210 23:53:21.911159    5874 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 23:53:21.914149    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:21.917047    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 23:53:21.917083    5874 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 23:53:21.933719    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 23:53:21.933741    5874 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 23:53:21.946869    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 23:53:21.946896    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 23:53:21.960107    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 23:53:22.027956    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:22.028354    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:22.307630    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:22.484550    5874 addons.go:495] Verifying addon gcp-auth=true in "addons-903947"
	I1210 23:53:22.487652    5874 out.go:179] * Verifying gcp-auth addon...
	I1210 23:53:22.491402    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 23:53:22.497579    5874 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 23:53:22.497603    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:22.525994    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:22.527153    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:22.807554    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:22.995079    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:23.026860    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:23.027121    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:23.306790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:23.499454    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:23.530907    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:23.531522    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:23.579977    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:23.806879    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:23.994760    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:24.025869    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:24.028315    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:24.307445    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:24.494461    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:24.526881    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:24.527270    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:24.807944    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:24.994927    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:25.026153    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:25.027044    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:25.308242    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:25.494575    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:25.525340    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:25.526725    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:25.580197    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:25.807206    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:25.993990    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:26.026584    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:26.026939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:26.307177    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:26.495065    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:26.526186    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:26.526425    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:26.807939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:26.995274    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:27.026747    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:27.026822    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:27.307270    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:27.495499    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:27.526573    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:27.526731    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1210 23:53:27.580468    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:27.807613    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:27.994352    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:28.026839    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:28.027315    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:28.307256    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:28.495111    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:28.530591    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:28.530654    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:28.807737    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:28.994599    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:29.025860    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:29.025999    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:29.307549    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:29.494883    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:29.525435    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:29.526807    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:29.807563    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:29.994394    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:30.030608    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:30.030822    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:30.080726    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:30.307728    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:30.494602    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:30.525224    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:30.527003    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:30.807375    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:30.993999    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:31.025686    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:31.026587    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:31.306853    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:31.495026    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:31.526075    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:31.526217    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:31.806659    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:31.994576    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:32.025482    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:32.026934    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:32.308724    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:32.495119    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:32.525459    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:32.526616    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:32.580350    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:32.807759    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:32.994574    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:33.027259    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:33.027368    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:33.307559    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:33.495574    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:33.525351    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:33.526678    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:33.807433    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:33.994208    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:34.027065    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:34.027502    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:34.307166    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:34.495159    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:34.526473    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:34.526580    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:34.806911    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:34.994691    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:35.026369    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:35.026813    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:35.080641    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:35.307620    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:35.494942    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:35.595881    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:35.596242    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:35.807075    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:35.995176    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:36.026554    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:36.026652    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:36.307084    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:36.495039    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:36.526450    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:36.526915    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:36.808101    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:36.995302    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:37.025918    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:37.027751    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:37.307090    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:37.495186    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:37.526373    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:37.526512    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:37.580136    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:37.807233    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:37.994790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:38.027078    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:38.029643    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:38.307907    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:38.494859    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:38.525893    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:38.526752    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:38.807539    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:38.994041    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:39.026717    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:39.026803    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:39.308257    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:39.495408    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:39.525207    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:39.526405    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:39.807403    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:39.994102    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:40.027832    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:40.028585    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:40.080619    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:40.307644    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:40.494402    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:40.525395    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:40.526023    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:40.808089    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:40.994759    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:41.025676    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:41.026302    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:41.306912    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:41.495098    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:41.525916    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:41.526958    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:41.807560    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:41.994499    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:42.027245    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:42.027443    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1210 23:53:42.081388    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:42.307979    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:42.495010    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:42.525775    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:42.527645    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:42.807524    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:42.994473    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:43.026693    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:43.027001    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:43.307510    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:43.494598    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:43.525935    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:43.526715    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:43.807390    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:43.993861    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:44.026672    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:44.027686    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:44.307653    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:44.494383    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:44.526473    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:44.526540    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:44.581081    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:44.806824    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:44.994889    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:45.031933    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:45.037033    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:45.307908    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:45.494919    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:45.526686    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:45.526835    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:45.808383    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:45.994410    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:46.025550    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:46.027169    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:46.306627    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:46.495065    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:46.526627    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:46.526815    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:46.581448    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:46.807643    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:46.994956    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:47.025899    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:47.028313    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:47.307370    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:47.494307    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:47.527922    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:47.528004    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:47.807416    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:47.995008    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:48.027049    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:48.027200    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:48.307230    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:48.494079    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:48.526116    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:48.526297    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:48.806810    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:48.994811    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:49.026658    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:49.027047    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:49.080604    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:49.307773    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:49.494701    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:49.525377    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:49.526638    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:49.807191    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:49.995025    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:50.025894    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:50.027931    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:50.306783    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:50.494602    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:50.525492    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:50.525978    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:50.806934    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:50.994866    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:51.025737    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:51.026444    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:51.307314    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:51.494101    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:51.526477    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:51.526584    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:51.580283    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:51.807152    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:51.995393    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:52.026626    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:52.026822    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:52.307529    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:52.494103    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:52.525582    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:52.526857    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:52.807378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:52.994948    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:53.025858    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:53.026135    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:53.096383    5874 node_ready.go:49] node "addons-903947" is "Ready"
	I1210 23:53:53.096407    5874 node_ready.go:38] duration metric: took 38.519087235s for node "addons-903947" to be "Ready" ...
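The node_ready.go wait above flips once the node object's Ready condition reports True. A client-go sketch of that check (nodeIsReady is a hypothetical helper, not minikube's implementation):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's Ready condition is True,
// the same condition the "has \"Ready\":\"False\"" retries above are watching.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}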
	I1210 23:53:53.096420    5874 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:53:53.096474    5874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:53:53.111419    5874 api_server.go:72] duration metric: took 40.013063088s to wait for apiserver process to appear ...
	I1210 23:53:53.111443    5874 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:53:53.111462    5874 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 23:53:53.176983    5874 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 23:53:53.189155    5874 api_server.go:141] control plane version: v1.34.2
	I1210 23:53:53.189229    5874 api_server.go:131] duration metric: took 77.778365ms to wait for apiserver health ...
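The healthz probe logged at api_server.go:253 and :279 is an HTTPS GET against /healthz that treats a 200 "ok" body as healthy. A minimal sketch, assuming a self-signed apiserver certificate (hence the InsecureSkipVerify transport):

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns nil when the healthz endpoint answers 200,
// e.g. url = "https://192.168.49.2:8443/healthz" as in the log above.
func apiserverHealthy(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // log shows: "returned 200: ok"
}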
	I1210 23:53:53.189253    5874 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:53:53.208841    5874 system_pods.go:59] 18 kube-system pods found
	I1210 23:53:53.208921    5874 system_pods.go:61] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending
	I1210 23:53:53.208944    5874 system_pods.go:61] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.208987    5874 system_pods.go:61] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.209015    5874 system_pods.go:61] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.209038    5874 system_pods.go:61] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.209060    5874 system_pods.go:61] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.209093    5874 system_pods.go:61] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.209117    5874 system_pods.go:61] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending
	I1210 23:53:53.209139    5874 system_pods.go:61] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.209161    5874 system_pods.go:61] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.209195    5874 system_pods.go:61] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending
	I1210 23:53:53.209221    5874 system_pods.go:61] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.209243    5874 system_pods.go:61] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.209265    5874 system_pods.go:61] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending
	I1210 23:53:53.209298    5874 system_pods.go:61] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.209324    5874 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.209345    5874 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.209368    5874 system_pods.go:61] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending
	I1210 23:53:53.209404    5874 system_pods.go:74] duration metric: took 20.130213ms to wait for pod list to return data ...
	I1210 23:53:53.209432    5874 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:53:53.213211    5874 default_sa.go:45] found service account: "default"
	I1210 23:53:53.213282    5874 default_sa.go:55] duration metric: took 3.822859ms for default service account to be created ...
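The default_sa.go wait above exists because pods cannot be admitted until the "default" ServiceAccount appears in the namespace. A hedged client-go equivalent (helper name is illustrative):

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultServiceAccountExists distinguishes "not created yet" (retry)
// from a real API error, as the found/duration lines above imply.
func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // not created yet; caller retries
	}
	if err != nil {
		return false, err
	}
	return true, nil
}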
	I1210 23:53:53.213305    5874 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:53:53.226845    5874 system_pods.go:86] 18 kube-system pods found
	I1210 23:53:53.226928    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending
	I1210 23:53:53.226948    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.226991    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.227017    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.227039    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.227062    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.227095    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.227127    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending
	I1210 23:53:53.227149    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.227173    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.227205    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending
	I1210 23:53:53.227231    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.227253    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.227277    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending
	I1210 23:53:53.227314    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.227339    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.227363    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.227386    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending
	I1210 23:53:53.227427    5874 retry.go:31] will retry after 256.634442ms: missing components: kube-dns
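The retry.go:31 lines above and below ("will retry after 256.634442ms", then ~260ms, ~426ms, ~419ms) show a jittered backoff around the system-pods check. A minimal stand-in using k8s.io/apimachinery's wait package rather than minikube's retry.go:

package sketch

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForComponents retries check with jittered exponential backoff until
// it reports no missing components (here, kube-dns) or the steps run out.
func waitForComponents(check func() (missing []string)) error {
	backoff := wait.Backoff{Duration: 250 * time.Millisecond, Factor: 1.5, Jitter: 0.1, Steps: 10}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if missing := check(); len(missing) > 0 {
			fmt.Printf("will retry: missing components: %v\n", missing)
			return false, nil
		}
		return true, nil
	})
}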
	I1210 23:53:53.352983    5874 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 23:53:53.353046    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:53.572938    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:53.573029    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:53.573050    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.573090    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.573119    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending
	I1210 23:53:53.573138    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.573158    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.573190    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.573215    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.573239    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:53.573262    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.573297    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.573322    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:53.573344    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.573363    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.573383    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:53.573417    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.573437    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.573459    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.573492    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:53.573532    5874 retry.go:31] will retry after 260.228046ms: missing components: kube-dns
	I1210 23:53:53.598191    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:53.605895    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:53.609782    5874 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 23:53:53.609802    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:53.807704    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:53.842016    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:53.842123    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:53.842165    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:53.842195    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:53.842223    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:53.842248    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.842279    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.842305    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.842325    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.842352    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:53.842384    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.842411    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.842435    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:53.842462    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:53.842499    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:53.842531    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:53.842558    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:53.842580    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:53.842614    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:53.842644    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:53.842676    5874 retry.go:31] will retry after 426.208397ms: missing components: kube-dns
	I1210 23:53:53.996079    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:54.100327    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:54.100548    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:54.273626    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:54.273660    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:54.273670    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:54.273678    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:54.273686    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:54.273691    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:54.273695    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:54.273699    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:54.273703    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:54.273709    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:54.273715    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:54.273719    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:54.273724    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:54.273731    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:54.273737    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:54.273744    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:54.273750    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:54.273757    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.273768    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.273774    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:54.273789    5874 retry.go:31] will retry after 419.085032ms: missing components: kube-dns
	I1210 23:53:54.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:54.495395    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:54.526660    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:54.528512    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:54.709010    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:54.709098    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:54.709123    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:54.709166    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:54.709196    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:54.709221    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:54.709245    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:54.709278    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:54.709307    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:54.709333    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:54.709351    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:54.709384    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:54.709413    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:54.709439    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:54.709463    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:54.709499    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:54.709530    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:54.709556    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.709581    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.709616    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:54.709649    5874 system_pods.go:126] duration metric: took 1.496323695s to wait for k8s-apps to be running ...
	I1210 23:53:54.709673    5874 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:53:54.709756    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:53:54.738078    5874 system_svc.go:56] duration metric: took 28.396128ms WaitForService to wait for kubelet
	I1210 23:53:54.738146    5874 kubeadm.go:587] duration metric: took 41.639794515s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
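The system_svc check above shells out over SSH to systemctl. A local stand-in for that ssh_runner call, exec'd directly rather than over SSH (the extra "service" token from the logged command line is dropped here):

package sketch

import "os/exec"

// kubeletActive returns nil iff "systemctl is-active --quiet kubelet"
// exits 0, i.e. the kubelet unit is active.
func kubeletActive() error {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
}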
	I1210 23:53:54.738181    5874 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:53:54.804466    5874 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 23:53:54.804543    5874 node_conditions.go:123] node cpu capacity is 2
	I1210 23:53:54.804572    5874 node_conditions.go:105] duration metric: took 66.367875ms to run NodePressure ...
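The two node_conditions values just logged (ephemeral storage 203034800Ki, cpu 2) are read off the Node object's capacity map; a sketch of where they come from, with an illustrative helper name:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the node_conditions.go:122-123 output above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}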
	I1210 23:53:54.804599    5874 start.go:242] waiting for startup goroutines ...
	I1210 23:53:54.839932    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:54.999606    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:55.027546    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:55.029060    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:55.307307    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:55.493929    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:55.525566    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:55.527584    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:55.811431    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:55.997429    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:56.032989    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:56.035248    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:56.307944    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:56.495837    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:56.528020    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:56.528255    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:56.808290    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:56.994430    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:57.095442    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:57.095520    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:57.308084    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:57.495147    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:57.528023    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:57.528277    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:57.808320    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:57.994621    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:58.030773    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:58.030959    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:58.307857    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:58.495529    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:58.527813    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:58.528615    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:58.815710    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:58.994757    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:59.029639    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:59.031757    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:59.312638    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:59.496484    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:59.528958    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:59.529568    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:59.814132    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:59.994570    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:00.035223    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:00.054266    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:00.328628    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:00.495553    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:00.528947    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:00.529460    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:00.816183    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:00.996153    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:01.096596    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:01.096960    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:01.308035    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:01.494771    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:01.526361    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:01.526820    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:01.808415    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:01.994451    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:02.028096    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:02.031750    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:02.308320    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:02.495548    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:02.527830    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:02.528560    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:02.808390    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:02.995030    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:03.027604    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:03.028103    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:03.308268    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:03.495853    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:03.527790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:03.528878    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:03.808309    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:03.995302    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:04.027271    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:04.027558    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:04.307562    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:04.494506    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:04.526654    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:04.526806    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:04.809070    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:04.995298    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:05.028562    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:05.028647    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:05.308159    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:05.495448    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:05.527328    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:05.527492    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:05.808310    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:05.994296    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:06.030101    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:06.030250    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:06.308452    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:06.494759    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:06.527072    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:06.527355    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:06.808497    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:06.994609    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:07.026038    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:07.029109    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:07.308190    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:07.494381    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:07.526804    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:07.527338    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:07.807487    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:07.994144    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:08.027133    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:08.028136    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:08.307613    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:08.495088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:08.596749    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:08.596836    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:08.808529    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:08.994455    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:09.028160    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:09.028669    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:09.308791    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:09.495174    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:09.527886    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:09.528258    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:09.808842    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:09.994611    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:10.026933    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:10.027130    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:10.307963    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:10.494701    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:10.527340    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:10.527654    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:10.809980    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:10.994820    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:11.025798    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:11.027585    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:11.308214    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:11.494013    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:11.527562    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:11.528262    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:11.813738    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:11.994572    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:12.027453    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:12.027499    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:12.308366    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:12.495705    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:12.526767    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:12.526802    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:12.808035    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:12.995067    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:13.027126    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:13.027213    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:13.308332    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:13.495216    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:13.529082    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:13.529712    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:13.808805    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:13.994905    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:14.027185    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:14.028744    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:14.310875    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:14.495234    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:14.526525    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:14.527022    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:14.808041    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:14.994862    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:15.027718    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:15.030913    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:15.308505    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:15.494186    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:15.527478    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:15.527773    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:15.808785    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:15.995755    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:16.097790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:16.098001    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:16.319285    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:16.494597    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:16.525696    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:16.527192    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:16.807589    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:16.995054    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:17.027392    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:17.027786    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:17.308276    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:17.495927    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:17.527504    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:17.528580    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:17.808117    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:17.994996    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:18.036514    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:18.036860    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:18.307953    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:18.495179    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:18.527640    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:18.528680    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:18.808257    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:18.995506    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:19.028193    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:19.028740    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:19.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:19.495439    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:19.529260    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:19.529658    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:19.807939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:19.994378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:20.028143    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:20.028321    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:20.312882    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:20.494207    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:20.526458    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:20.526861    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:20.807796    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:20.995112    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:21.026534    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:21.027331    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:21.309109    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:21.495088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:21.526289    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:21.526883    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:21.807751    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:21.994895    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:22.096985    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:22.097530    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:22.309040    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:22.494922    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:22.528646    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:22.529144    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:22.811863    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:22.995440    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:23.027256    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:23.028088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:23.308019    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:23.496022    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:23.526932    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:23.527146    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:23.808638    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:23.994934    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:24.027611    5874 kapi.go:107] duration metric: took 1m5.005227236s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 23:54:24.028004    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:24.308541    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:24.494906    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:24.527371    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:24.808747    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:24.995279    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:25.027882    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:25.307799    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:25.495017    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:25.526690    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:25.807736    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:25.994855    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:26.096241    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:26.307987    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:26.495614    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:26.527176    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:26.808541    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:26.994649    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:27.026648    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:27.308254    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:27.494942    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:27.527509    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:27.808004    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:27.995294    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:28.027751    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:28.307951    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:28.495338    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:28.527967    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:28.808211    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:28.995642    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:29.026746    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:29.307359    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:29.495260    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:29.526993    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:29.807356    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:29.995077    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:30.032070    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:30.308817    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:30.495082    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:30.526229    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:30.807070    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:30.995452    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:31.028257    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:31.308214    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:31.494863    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:31.526592    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:31.807722    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:31.994311    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:32.027517    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:32.310794    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:32.496513    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:32.599856    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:32.808003    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:32.997603    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:33.032585    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:33.309437    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:33.495618    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:33.531023    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:33.808423    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:33.994651    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:34.027620    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:34.307964    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:34.495106    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:34.526697    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:34.808578    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:34.995268    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:35.028026    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:35.307790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:35.494636    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:35.526853    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:35.808255    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:35.995339    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:36.029852    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:36.307270    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:36.494479    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:36.526902    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:36.808523    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:36.994189    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:37.027686    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:37.308726    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:37.495071    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:37.526751    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:37.808732    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:37.994440    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:38.027182    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:38.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:38.495420    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:38.527002    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:38.811652    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.014612    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:39.030241    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:39.309417    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.494054    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:39.526780    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:39.808721    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.996928    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:40.030002    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:40.311877    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:40.497801    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:40.526477    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:40.807259    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:40.994585    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:41.027340    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:41.307314    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:41.494342    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:41.526914    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:41.809439    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:42.013819    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:42.052035    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:42.309610    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:42.495003    5874 kapi.go:107] duration metric: took 1m20.003567592s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 23:54:42.498571    5874 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-903947 cluster.
	I1210 23:54:42.501693    5874 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 23:54:42.505255    5874 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 23:54:42.526874    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:42.808628    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:43.031803    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:43.308516    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:43.527700    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:43.811525    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:44.029735    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:44.308297    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:44.527396    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:44.807673    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:45.033640    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:45.309942    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:45.527426    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:45.808296    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:46.027183    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:46.308137    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:46.528468    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:46.812201    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:47.029158    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:47.308612    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:47.527442    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:47.807698    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:48.030173    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:48.308202    5874 kapi.go:107] duration metric: took 1m29.004121483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 23:54:48.527023    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:49.026774    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:49.528563    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:50.027771    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:50.527293    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:51.028743    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:51.527201    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:52.027653    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:52.527097    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:53.026603    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:53.526948    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:54.026346    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:54.527054    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:55.026688    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:55.527197    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:56.026287    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:56.526500    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:57.027436    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:57.527314    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:58.027552    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:58.527378    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:59.026523    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:59.526626    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:00.048240    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:00.527394    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:01.027173    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:01.526852    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:02.028074    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:02.529310    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:03.027666    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:03.527323    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:04.027244    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:04.531065    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:05.027607    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:05.528089    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:06.027643    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:06.527641    5874 kapi.go:107] duration metric: took 1m47.50426509s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 23:55:06.530868    5874 out.go:179] * Enabled addons: nvidia-device-plugin, inspektor-gadget, storage-provisioner, cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1210 23:55:06.533824    5874 addons.go:530] duration metric: took 1m53.435187373s for enable addons: enabled=[nvidia-device-plugin inspektor-gadget storage-provisioner cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1210 23:55:06.533888    5874 start.go:247] waiting for cluster config update ...
	I1210 23:55:06.533915    5874 start.go:256] writing updated cluster config ...
	I1210 23:55:06.534209    5874 ssh_runner.go:195] Run: rm -f paused
	I1210 23:55:06.540119    5874 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:55:06.627543    5874 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d2djj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.633582    5874 pod_ready.go:94] pod "coredns-66bc5c9577-d2djj" is "Ready"
	I1210 23:55:06.633611    5874 pod_ready.go:86] duration metric: took 6.039113ms for pod "coredns-66bc5c9577-d2djj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.635958    5874 pod_ready.go:83] waiting for pod "etcd-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.640656    5874 pod_ready.go:94] pod "etcd-addons-903947" is "Ready"
	I1210 23:55:06.640688    5874 pod_ready.go:86] duration metric: took 4.707486ms for pod "etcd-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.642998    5874 pod_ready.go:83] waiting for pod "kube-apiserver-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.650127    5874 pod_ready.go:94] pod "kube-apiserver-addons-903947" is "Ready"
	I1210 23:55:06.650155    5874 pod_ready.go:86] duration metric: took 7.133548ms for pod "kube-apiserver-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.652791    5874 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.943811    5874 pod_ready.go:94] pod "kube-controller-manager-addons-903947" is "Ready"
	I1210 23:55:06.943840    5874 pod_ready.go:86] duration metric: took 291.01994ms for pod "kube-controller-manager-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.144622    5874 pod_ready.go:83] waiting for pod "kube-proxy-c2rd4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.544140    5874 pod_ready.go:94] pod "kube-proxy-c2rd4" is "Ready"
	I1210 23:55:07.544171    5874 pod_ready.go:86] duration metric: took 399.515197ms for pod "kube-proxy-c2rd4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.744697    5874 pod_ready.go:83] waiting for pod "kube-scheduler-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:08.144244    5874 pod_ready.go:94] pod "kube-scheduler-addons-903947" is "Ready"
	I1210 23:55:08.144276    5874 pod_ready.go:86] duration metric: took 399.544373ms for pod "kube-scheduler-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:08.144290    5874 pod_ready.go:40] duration metric: took 1.60414067s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:55:08.529010    5874 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 23:55:08.535663    5874 out.go:179] * Done! kubectl is now configured to use "addons-903947" cluster and "default" namespace by default
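
The pod_ready lines above are minikube's final readiness gate: it lists "kube-system" pods matching each control-plane label selector and polls until every match reports the Ready condition (or is gone), within the 4m0s budget logged at 23:55:06. Below is a minimal client-go sketch of that pattern; the 500ms interval, the selector loop, and the clientset wiring are illustrative assumptions, not minikube's actual implementation.

// Sketch of the label-based readiness gate seen in the pod_ready log lines.
// Assumes a configured *kubernetes.Clientset; the 500ms poll interval and
// per-selector loop are illustrative, not minikube's actual code.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitKubeSystemReady polls every pod in kube-system matching each label
// selector until all matches are Ready, or the timeout (4m0s in the log)
// expires.
func waitKubeSystemReady(ctx context.Context, cs *kubernetes.Clientset, selectors []string, timeout time.Duration) error {
	for _, sel := range selectors {
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, nil // treat list errors as transient; keep polling
				}
				for i := range pods.Items {
					if !isReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			return fmt.Errorf("pods %q not ready: %w", sel, err)
		}
		fmt.Printf("pods %q are Ready\n", sel)
	}
	return nil
}

Returning (false, nil) on transient errors keeps the poll alive until the deadline, which matches the steady retry cadence visible in the log above.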
	
	
	==> CRI-O <==
	Dec 10 23:58:06 addons-903947 crio[835]: time="2025-12-10T23:58:06.843497577Z" level=info msg="Removed container a64e561ccc7de18dda4f8f81e3c4e76500c38e7d42f38bf6b023ad9dc229579f: kube-system/registry-creds-764b6fb674-jkt4x/registry-creds" id=8deef49f-d953-446b-b9ed-59165a0d9cba name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.050929434Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-xvkw8/POD" id=6cc74cb6-8abf-499f-9e95-5f0b75b178f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.051013026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.073690082Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xvkw8 Namespace:default ID:9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879 UID:ecc80f87-30e5-4280-8354-f2f40c0023ee NetNS:/var/run/netns/87344b63-1a56-4d66-9128-adc6f351a441 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079478}] Aliases:map[]}"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.073751758Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-xvkw8 to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.093448636Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xvkw8 Namespace:default ID:9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879 UID:ecc80f87-30e5-4280-8354-f2f40c0023ee NetNS:/var/run/netns/87344b63-1a56-4d66-9128-adc6f351a441 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079478}] Aliases:map[]}"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.093778606Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-xvkw8 for CNI network kindnet (type=ptp)"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.109845024Z" level=info msg="Ran pod sandbox 9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879 with infra container: default/hello-world-app-5d498dc89-xvkw8/POD" id=6cc74cb6-8abf-499f-9e95-5f0b75b178f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.123342371Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0b01321c-cd9b-4b71-a24f-f799939731c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.123675459Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0b01321c-cd9b-4b71-a24f-f799939731c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.123822918Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=0b01321c-cd9b-4b71-a24f-f799939731c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.124783329Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6d11c361-607a-444c-b914-f6ec5a716df8 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.129798338Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.79698998Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=6d11c361-607a-444c-b914-f6ec5a716df8 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.797762179Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=82971e91-838e-499f-bd4d-b0a2b09d5476 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.801291103Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=12d793f2-4d9a-4c90-bc30-b15ca75a69c7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.814327342Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-xvkw8/hello-world-app" id=253193d2-f87f-4bd8-973e-98f8849c8043 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.814621849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.826159187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.82648736Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/10966646a7b708bbc722d97ea4e37efbd75750b06b2bc475d4b666e0ceab251d/merged/etc/passwd: no such file or directory"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.826578065Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/10966646a7b708bbc722d97ea4e37efbd75750b06b2bc475d4b666e0ceab251d/merged/etc/group: no such file or directory"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.826888006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.852967261Z" level=info msg="Created container 4297a5125b2a3dfcce4581e45d9a3ddbe5ab0ab61ccbf9bff3ad685a2124489a: default/hello-world-app-5d498dc89-xvkw8/hello-world-app" id=253193d2-f87f-4bd8-973e-98f8849c8043 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.858925878Z" level=info msg="Starting container: 4297a5125b2a3dfcce4581e45d9a3ddbe5ab0ab61ccbf9bff3ad685a2124489a" id=454c3ae0-17d1-4850-abb8-99c4abaf0fab name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:58:07 addons-903947 crio[835]: time="2025-12-10T23:58:07.863756974Z" level=info msg="Started container" PID=7060 containerID=4297a5125b2a3dfcce4581e45d9a3ddbe5ab0ab61ccbf9bff3ad685a2124489a description=default/hello-world-app-5d498dc89-xvkw8/hello-world-app id=454c3ae0-17d1-4850-abb8-99c4abaf0fab name=/runtime.v1.RuntimeService/StartContainer sandboxID=9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	4297a5125b2a3       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   9065ae4ca150c       hello-world-app-5d498dc89-xvkw8             default
	6ed9ec64fddd6       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             2 seconds ago            Exited              registry-creds                           1                   cfbc05dc38141       registry-creds-764b6fb674-jkt4x             kube-system
	dcf22216e1aff       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   c377b3ef1e4b1       nginx                                       default
	f5cf7c2c09d28       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   071f723352513       busybox                                     default
	151c5e31ce828       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   16bf235c27ccc       ingress-nginx-controller-85d4c799dd-wgvkv   ingress-nginx
	943aa1912d4eb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	3b5f3211aef97       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	994a8f897438c       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	5aabb9a953d20       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	e2cab701ed360       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   a58185ed1a522       gcp-auth-78565c9fb4-fnrhq                   gcp-auth
	b2837952c0dbd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   b8b49d007d61e       gadget-fqvh4                                gadget
	c4e4cea51bd36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	d2f52d911b3ce       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   d65b4f139b68a       local-path-provisioner-648f6765c9-tcz2p     local-path-storage
	ab85dca695b3d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              patch                                    0                   3d9576a622130       ingress-nginx-admission-patch-7tj25         ingress-nginx
	c0a8cc7c2669f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   8d2ea723921b0       ingress-nginx-admission-create-7klqt        ingress-nginx
	8cb1a16ef86ba       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   47087c6174d0f       kube-ingress-dns-minikube                   kube-system
	a629933611fce       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   0629355522fed       registry-proxy-pnxjr                        kube-system
	025942b6fe499       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   e763fb924970e       nvidia-device-plugin-daemonset-mpzgr        kube-system
	f5c570a6481f2       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   9b829e44e9068       registry-6b586f9694-84lmh                   kube-system
	56a6cc123f59d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	9b51fb4b4cd2a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   23e51ff8331a7       csi-hostpath-attacher-0                     kube-system
	d16bf5857a0b5       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   6942e6be91b9f       snapshot-controller-7d9fbc56b8-4gxqm        kube-system
	740d4f99a5749       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   c7e8c0da85f1a       yakd-dashboard-5ff678cb9-t5k9j              yakd-dashboard
	be88179f8ab31       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   586c54945de3c       snapshot-controller-7d9fbc56b8-2r8cx        kube-system
	d84d9d1bef357       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   a7235df2c4b96       csi-hostpath-resizer-0                      kube-system
	15faddfa8e68f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   55d0b5df7b414       cloud-spanner-emulator-5bdddb765-brsbh      default
	245f22fe409d8       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   ea073d4f2cdb2       metrics-server-85b7d694d7-5hpfv             kube-system
	976bb3f5e7f34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   ec0db5b7d9cbe       storage-provisioner                         kube-system
	09d359052fb27       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   9c15f169ffc33       coredns-66bc5c9577-d2djj                    kube-system
	98be1397391f4       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   aff6b7fc5fc3e       kube-proxy-c2rd4                            kube-system
	49bf5d15dca73       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   0ec0eae9188af       kindnet-mqqrh                               kube-system
	97d59cbad9439       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   acd5fac1b2212       kube-scheduler-addons-903947                kube-system
	ff91a260c6d64       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   717ba32271758       kube-apiserver-addons-903947                kube-system
	b2794babaa59b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   7c71e2cb1fe75       etcd-addons-903947                          kube-system
	479052386d5c3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   f2b6d1dea9eed       kube-controller-manager-addons-903947       kube-system
	
	
	==> coredns [09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55] <==
	[INFO] 10.244.0.15:50876 - 59005 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00292025s
	[INFO] 10.244.0.15:50876 - 12029 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000190141s
	[INFO] 10.244.0.15:50876 - 39032 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000318349s
	[INFO] 10.244.0.15:60459 - 17450 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147006s
	[INFO] 10.244.0.15:60459 - 17253 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169472s
	[INFO] 10.244.0.15:46137 - 10544 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116105s
	[INFO] 10.244.0.15:46137 - 10355 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011878s
	[INFO] 10.244.0.15:46589 - 62145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115933s
	[INFO] 10.244.0.15:46589 - 61701 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087715s
	[INFO] 10.244.0.15:37135 - 42406 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001741558s
	[INFO] 10.244.0.15:37135 - 42595 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001844626s
	[INFO] 10.244.0.15:46508 - 32453 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012084s
	[INFO] 10.244.0.15:46508 - 32041 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199208s
	[INFO] 10.244.0.20:49219 - 58195 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226951s
	[INFO] 10.244.0.20:60921 - 4921 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143988s
	[INFO] 10.244.0.20:33808 - 8961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000256187s
	[INFO] 10.244.0.20:44811 - 39714 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000331889s
	[INFO] 10.244.0.20:33189 - 20515 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000220994s
	[INFO] 10.244.0.20:39497 - 30494 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000219419s
	[INFO] 10.244.0.20:36426 - 4120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002246943s
	[INFO] 10.244.0.20:48882 - 36512 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00248123s
	[INFO] 10.244.0.20:42955 - 9144 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001977061s
	[INFO] 10.244.0.20:55999 - 48585 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002562061s
	[INFO] 10.244.0.23:46095 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00014947s
	[INFO] 10.244.0.23:44820 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136193s
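
The NXDOMAIN/NOERROR pairs above are ordinary Kubernetes DNS search-path expansion: pod resolv.conf defaults to ndots:5, so a name like registry.kube-system.svc.cluster.local is first tried with each search domain appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal), each answering NXDOMAIN, before the bare name resolves NOERROR. A trailing dot marks the name absolute and skips the expansion, as this minimal sketch shows when run inside a cluster pod:

// Demonstrates why the coredns log shows a burst of NXDOMAIN answers per
// lookup: a name with fewer dots than resolv.conf's ndots (5 in pods) is
// tried against every search domain first; a trailing dot makes the name
// absolute and skips that expansion. Minimal sketch; run inside a pod.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	ctx := context.Background()
	// Relative form: subject to search-list expansion (the NXDOMAIN burst).
	addrs, err := net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local")
	fmt.Println("relative:", addrs, err)
	// Absolute form: resolved in a single query, as in the final NOERROR line.
	addrs, err = net.DefaultResolver.LookupHost(ctx, "registry.kube-system.svc.cluster.local.")
	fmt.Println("absolute:", addrs, err)
}

Callers that hit in-cluster services on a hot path often use the absolute form for exactly this reason: one query instead of five.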
	
	
	==> describe nodes <==
	Name:               addons-903947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-903947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=addons-903947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_53_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-903947
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-903947"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-903947
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:58:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:57:53 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:57:53 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:57:53 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:57:53 +0000   Wed, 10 Dec 2025 23:53:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-903947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c7969298-bf03-4ca2-bd93-d9f79dc1e090
	  Boot ID:                    0edab61d-52b1-4525-85dd-848bc0b1d36e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-5bdddb765-brsbh       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-xvkw8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-fqvh4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-fnrhq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-wgvkv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-d2djj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-4lrsf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 etcd-addons-903947                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m1s
	  kube-system                 kindnet-mqqrh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-903947                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-addons-903947        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-c2rd4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-903947                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 metrics-server-85b7d694d7-5hpfv              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m50s
	  kube-system                 nvidia-device-plugin-daemonset-mpzgr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-6b586f9694-84lmh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 registry-creds-764b6fb674-jkt4x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-pnxjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 snapshot-controller-7d9fbc56b8-2r8cx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-4gxqm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-tcz2p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-t5k9j               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m56s  kube-proxy       
	  Normal   Starting                 5m1s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m1s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m1s   kubelet          Node addons-903947 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s   kubelet          Node addons-903947 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s   kubelet          Node addons-903947 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s  node-controller  Node addons-903947 event: Registered Node addons-903947 in Controller
	  Normal   NodeReady                4m15s  kubelet          Node addons-903947 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259] <==
	{"level":"warn","ts":"2025-12-10T23:53:03.180004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.192759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.248009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.284145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.296838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.339835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.374857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.390064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.431448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.460397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.499034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.534465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.556335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.610659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.634962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.694273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.720965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.761556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.926856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:19.511496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:19.530605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.702043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.717571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.745627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.765901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46420","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e2cab701ed36062191e22a5d185507b2e4d51a30d9853eb7ceb9b08d9663ccf9] <==
	2025/12/10 23:54:41 GCP Auth Webhook started!
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	2025/12/10 23:55:29 Ready to marshal response ...
	2025/12/10 23:55:29 Ready to write response ...
	2025/12/10 23:55:34 Ready to marshal response ...
	2025/12/10 23:55:34 Ready to write response ...
	2025/12/10 23:55:34 Ready to marshal response ...
	2025/12/10 23:55:34 Ready to write response ...
	2025/12/10 23:55:42 Ready to marshal response ...
	2025/12/10 23:55:42 Ready to write response ...
	2025/12/10 23:55:46 Ready to marshal response ...
	2025/12/10 23:55:46 Ready to write response ...
	2025/12/10 23:55:51 Ready to marshal response ...
	2025/12/10 23:55:51 Ready to write response ...
	2025/12/10 23:56:15 Ready to marshal response ...
	2025/12/10 23:56:15 Ready to write response ...
	2025/12/10 23:58:06 Ready to marshal response ...
	2025/12/10 23:58:06 Ready to write response ...
	
	
	==> kernel <==
	 23:58:09 up 9 min,  0 user,  load average: 0.97, 0.96, 0.54
	Linux addons-903947 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0] <==
	I1210 23:56:02.631667       1 main.go:301] handling current node
	I1210 23:56:12.632476       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:56:12.632534       1 main.go:301] handling current node
	I1210 23:56:22.634010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:56:22.634047       1 main.go:301] handling current node
	I1210 23:56:32.635895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:56:32.635929       1 main.go:301] handling current node
	I1210 23:56:42.640387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:56:42.640423       1 main.go:301] handling current node
	I1210 23:56:52.636761       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:56:52.636803       1 main.go:301] handling current node
	I1210 23:57:02.639545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:02.639594       1 main.go:301] handling current node
	I1210 23:57:12.641303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:12.641451       1 main.go:301] handling current node
	I1210 23:57:22.635954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:22.635990       1 main.go:301] handling current node
	I1210 23:57:32.641031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:32.641065       1 main.go:301] handling current node
	I1210 23:57:42.640375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:42.640406       1 main.go:301] handling current node
	I1210 23:57:52.637870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:57:52.637905       1 main.go:301] handling current node
	I1210 23:58:02.635491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:58:02.635527       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c] <==
	W1210 23:53:41.717563       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:41.744941       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:41.765434       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:53.132908       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.132960       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:53:53.143258       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.143354       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:53:53.211175       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.216794       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:54:00.811532       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 23:54:00.811608       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 23:54:00.812799       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.814271       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.819528       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.840848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	I1210 23:54:00.997991       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 23:55:19.077905       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38546: use of closed network connection
	E1210 23:55:19.216172       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38572: use of closed network connection
	I1210 23:55:46.183624       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 23:55:46.533736       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.134.64"}
	I1210 23:55:58.704707       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1210 23:58:06.977014       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.186.92"}
	
	
	==> kube-controller-manager [479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0] <==
	I1210 23:53:11.714726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 23:53:11.714537       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:53:11.714875       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:11.714895       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 23:53:11.714901       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 23:53:11.716095       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:53:11.717482       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:11.717571       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 23:53:11.720660       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 23:53:11.720749       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:53:11.720748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:53:11.721277       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 23:53:11.726028       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 23:53:11.731388       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:53:11.735651       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:53:11.738253       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1210 23:53:17.983496       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 23:53:41.694772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 23:53:41.694946       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 23:53:41.695016       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 23:53:41.731967       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 23:53:41.736496       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 23:53:41.795310       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:53:41.837199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:56.707326       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a] <==
	I1210 23:53:12.294755       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:53:12.383436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:53:12.491405       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:53:12.491438       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 23:53:12.491536       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:53:12.634235       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:53:12.634411       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:53:12.643637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:53:12.653659       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:53:12.653689       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:53:12.657748       1 config.go:200] "Starting service config controller"
	I1210 23:53:12.657791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:53:12.675308       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:53:12.675329       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:53:12.675375       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:53:12.675380       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:53:12.676154       1 config.go:309] "Starting node config controller"
	I1210 23:53:12.676163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:53:12.676175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:53:12.758338       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:53:12.776252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:53:12.776252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c] <==
	E1210 23:53:04.779541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:53:04.779597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 23:53:04.779646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:53:04.779694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:53:04.779767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:53:04.779837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:53:04.779884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:53:04.779928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:53:04.779966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 23:53:04.780011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:53:04.780056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 23:53:04.780103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 23:53:04.780146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:53:04.780194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:53:04.780237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 23:53:04.780337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 23:53:04.780394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 23:53:05.605056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:53:05.614499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:53:05.742024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:53:05.772549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 23:53:05.784791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:53:05.831363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:53:06.083192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 23:53:09.043719       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 23:56:24 addons-903947 kubelet[1298]: I1210 23:56:24.490934    1298 scope.go:117] "RemoveContainer" containerID="f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c"
	Dec 10 23:56:24 addons-903947 kubelet[1298]: E1210 23:56:24.491369    1298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c\": container with ID starting with f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c not found: ID does not exist" containerID="f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c"
	Dec 10 23:56:24 addons-903947 kubelet[1298]: I1210 23:56:24.491411    1298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c"} err="failed to get container status \"f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c\": rpc error: code = NotFound desc = could not find container \"f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c\": container with ID starting with f08061108e3c0dfd48ba0f5fedc285ca7b87dde6c8abd3cef08154b7491fb27c not found: ID does not exist"
	Dec 10 23:56:25 addons-903947 kubelet[1298]: I1210 23:56:25.445794    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82de149a-abc4-4187-afc8-81e6005968b1" path="/var/lib/kubelet/pods/82de149a-abc4-4187-afc8-81e6005968b1/volumes"
	Dec 10 23:56:32 addons-903947 kubelet[1298]: I1210 23:56:32.443098    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pnxjr" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:56:42 addons-903947 kubelet[1298]: I1210 23:56:42.443233    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mpzgr" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:56:44 addons-903947 kubelet[1298]: I1210 23:56:44.442916    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-84lmh" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:57:37 addons-903947 kubelet[1298]: I1210 23:57:37.445461    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pnxjr" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:57:54 addons-903947 kubelet[1298]: I1210 23:57:54.443594    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mpzgr" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:02 addons-903947 kubelet[1298]: I1210 23:58:02.442843    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-84lmh" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:04 addons-903947 kubelet[1298]: I1210 23:58:04.143394    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jkt4x" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:04 addons-903947 kubelet[1298]: W1210 23:58:04.171115    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/crio-cfbc05dc381415c450083b900c447eba1dd88079aa8b4d18f4862cf40dcbb5f0 WatchSource:0}: Error finding container cfbc05dc381415c450083b900c447eba1dd88079aa8b4d18f4862cf40dcbb5f0: Status 404 returned error can't find the container with id cfbc05dc381415c450083b900c447eba1dd88079aa8b4d18f4862cf40dcbb5f0
	Dec 10 23:58:05 addons-903947 kubelet[1298]: I1210 23:58:05.822947    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jkt4x" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:05 addons-903947 kubelet[1298]: I1210 23:58:05.823037    1298 scope.go:117] "RemoveContainer" containerID="a64e561ccc7de18dda4f8f81e3c4e76500c38e7d42f38bf6b023ad9dc229579f"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: I1210 23:58:06.829730    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jkt4x" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: I1210 23:58:06.829791    1298 scope.go:117] "RemoveContainer" containerID="6ed9ec64fddd6b7cdfb08d977ebd1d6d8ab9246315a0b2f7c6673d5cb58bdb7e"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: E1210 23:58:06.829936    1298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jkt4x_kube-system(b8da8abc-7964-4c10-95ce-1b6e0189c8c5)\"" pod="kube-system/registry-creds-764b6fb674-jkt4x" podUID="b8da8abc-7964-4c10-95ce-1b6e0189c8c5"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: I1210 23:58:06.830552    1298 scope.go:117] "RemoveContainer" containerID="a64e561ccc7de18dda4f8f81e3c4e76500c38e7d42f38bf6b023ad9dc229579f"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: I1210 23:58:06.843912    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hwm\" (UniqueName: \"kubernetes.io/projected/ecc80f87-30e5-4280-8354-f2f40c0023ee-kube-api-access-f5hwm\") pod \"hello-world-app-5d498dc89-xvkw8\" (UID: \"ecc80f87-30e5-4280-8354-f2f40c0023ee\") " pod="default/hello-world-app-5d498dc89-xvkw8"
	Dec 10 23:58:06 addons-903947 kubelet[1298]: I1210 23:58:06.843977    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ecc80f87-30e5-4280-8354-f2f40c0023ee-gcp-creds\") pod \"hello-world-app-5d498dc89-xvkw8\" (UID: \"ecc80f87-30e5-4280-8354-f2f40c0023ee\") " pod="default/hello-world-app-5d498dc89-xvkw8"
	Dec 10 23:58:07 addons-903947 kubelet[1298]: W1210 23:58:07.109285    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/crio-9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879 WatchSource:0}: Error finding container 9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879: Status 404 returned error can't find the container with id 9065ae4ca150cf68d3adc28cdc1acb9c6ee4c508cc552c92d9e659e637a19879
	Dec 10 23:58:07 addons-903947 kubelet[1298]: I1210 23:58:07.841045    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jkt4x" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 23:58:07 addons-903947 kubelet[1298]: I1210 23:58:07.841169    1298 scope.go:117] "RemoveContainer" containerID="6ed9ec64fddd6b7cdfb08d977ebd1d6d8ab9246315a0b2f7c6673d5cb58bdb7e"
	Dec 10 23:58:07 addons-903947 kubelet[1298]: E1210 23:58:07.841345    1298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jkt4x_kube-system(b8da8abc-7964-4c10-95ce-1b6e0189c8c5)\"" pod="kube-system/registry-creds-764b6fb674-jkt4x" podUID="b8da8abc-7964-4c10-95ce-1b6e0189c8c5"
	Dec 10 23:58:08 addons-903947 kubelet[1298]: I1210 23:58:08.865548    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-xvkw8" podStartSLOduration=2.19115897 podStartE2EDuration="2.865531779s" podCreationTimestamp="2025-12-10 23:58:06 +0000 UTC" firstStartedPulling="2025-12-10 23:58:07.12420064 +0000 UTC m=+299.825744575" lastFinishedPulling="2025-12-10 23:58:07.798573458 +0000 UTC m=+300.500117384" observedRunningTime="2025-12-10 23:58:08.863832464 +0000 UTC m=+301.565376399" watchObservedRunningTime="2025-12-10 23:58:08.865531779 +0000 UTC m=+301.567075714"
	
	
	==> storage-provisioner [976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0] <==
	W1210 23:57:43.335578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:45.338886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:45.343555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:47.346548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:47.352938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:49.356464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:49.361166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:51.364532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:51.369318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:53.372029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:53.378383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:55.381354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:55.387943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:57.394734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:57.400259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:59.404019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:57:59.410926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:01.413836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:01.420654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:03.425069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:03.429734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:05.433150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:05.438916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:07.453425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:58:07.470743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-903947 -n addons-903947
helpers_test.go:270: (dbg) Run:  kubectl --context addons-903947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25: exit status 1 (91.164438ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7klqt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7tj25" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25: exit status 1
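The two pods flagged as non-running above are the ingress-nginx admission webhook Jobs (admission-create and admission-patch); they run to completion and their pods are then cleaned up, so the follow-up describe correctly returns NotFound rather than pointing at a further failure. A quick way to confirm this on a live cluster, sketched with a label selector taken from the upstream ingress-nginx manifests rather than from this log:

	kubectl --context addons-903947 -n ingress-nginx get jobs,pods -l app.kubernetes.io/component=admission-webhook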
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (266.609221ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1210 23:58:10.303754   15414 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:58:10.304004   15414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:58:10.304034   15414 out.go:374] Setting ErrFile to fd 2...
	I1210 23:58:10.304054   15414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:58:10.304330   15414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:58:10.304661   15414 mustload.go:66] Loading cluster: addons-903947
	I1210 23:58:10.305095   15414 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:58:10.305153   15414 addons.go:622] checking whether the cluster is paused
	I1210 23:58:10.305304   15414 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:58:10.305335   15414 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:58:10.305858   15414 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:58:10.326880   15414 ssh_runner.go:195] Run: systemctl --version
	I1210 23:58:10.326948   15414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:58:10.347105   15414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:58:10.454071   15414 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:58:10.454153   15414 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:58:10.487438   15414 cri.go:89] found id: "6ed9ec64fddd6b7cdfb08d977ebd1d6d8ab9246315a0b2f7c6673d5cb58bdb7e"
	I1210 23:58:10.487457   15414 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:58:10.487462   15414 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:58:10.487466   15414 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:58:10.487470   15414 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:58:10.487474   15414 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:58:10.487477   15414 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:58:10.487480   15414 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:58:10.487484   15414 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:58:10.487490   15414 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:58:10.487493   15414 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:58:10.487496   15414 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:58:10.487500   15414 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:58:10.487503   15414 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:58:10.487506   15414 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:58:10.487512   15414 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:58:10.487515   15414 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:58:10.487521   15414 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:58:10.487524   15414 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:58:10.487527   15414 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:58:10.487533   15414 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:58:10.487537   15414 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:58:10.487540   15414 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:58:10.487543   15414 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:58:10.487547   15414 cri.go:89] found id: ""
	I1210 23:58:10.487595   15414 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:58:10.503106   15414 out.go:203] 
	W1210 23:58:10.506148   15414 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:58:10.506175   15414 out.go:285] * 
	* 
	W1210 23:58:10.510478   15414 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:58:10.513351   15414 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable ingress --alsologtostderr -v=1: exit status 11 (268.1347ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:58:10.571731   15456 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:58:10.571950   15456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:58:10.571962   15456 out.go:374] Setting ErrFile to fd 2...
	I1210 23:58:10.571967   15456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:58:10.572239   15456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:58:10.572580   15456 mustload.go:66] Loading cluster: addons-903947
	I1210 23:58:10.573005   15456 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:58:10.573044   15456 addons.go:622] checking whether the cluster is paused
	I1210 23:58:10.573181   15456 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:58:10.573199   15456 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:58:10.573744   15456 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:58:10.592161   15456 ssh_runner.go:195] Run: systemctl --version
	I1210 23:58:10.592225   15456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:58:10.611732   15456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:58:10.721538   15456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:58:10.721640   15456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:58:10.753815   15456 cri.go:89] found id: "6ed9ec64fddd6b7cdfb08d977ebd1d6d8ab9246315a0b2f7c6673d5cb58bdb7e"
	I1210 23:58:10.753834   15456 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:58:10.753839   15456 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:58:10.753843   15456 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:58:10.753847   15456 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:58:10.753851   15456 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:58:10.753854   15456 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:58:10.753857   15456 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:58:10.753860   15456 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:58:10.753874   15456 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:58:10.753878   15456 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:58:10.753881   15456 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:58:10.753884   15456 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:58:10.753887   15456 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:58:10.753890   15456 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:58:10.753895   15456 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:58:10.753898   15456 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:58:10.753903   15456 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:58:10.753906   15456 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:58:10.753909   15456 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:58:10.753914   15456 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:58:10.753918   15456 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:58:10.753925   15456 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:58:10.753929   15456 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:58:10.753932   15456 cri.go:89] found id: ""
	I1210 23:58:10.753993   15456 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:58:10.769860   15456 out.go:203] 
	W1210 23:58:10.772852   15456 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:58:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:58:10.772878   15456 out.go:285] * 
	* 
	W1210 23:58:10.777269   15456 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:58:10.780221   15456 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable ingress --alsologtostderr -v=1": exit status 11
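Both disable attempts in this test fail before the addon is ever touched: `addons disable` first checks whether the cluster is paused, and that check lists kube-system containers via crictl and then shells out to `sudo runc list -f json`, which exits 1 here because /run/runc is absent on this crio node. A minimal Go sketch of a paused-check that tolerates the missing state directory (a hypothetical helper, not minikube's actual code; the runc JSON field names are assumed from its `list` output):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)
	
	// runcContainer mirrors the fields we need from `runc list -f json`
	// (field names assumed; hedged).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	// listPaused reproduces the check failing above, but treats a missing
	// /run/runc state directory (the crio case in these logs) as "nothing
	// is paused" instead of a hard error. Illustrative only.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil // no runc state dir => no paused runc containers
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}
	
	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused runc containers:", ids)
	}

Since every disable below trips over this same check, a single fix at this point would likely clear the whole group of exit-status-11 failures.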
--- FAIL: TestAddons/parallel/Ingress (144.91s)

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-fqvh4" [5f946695-c2be-410e-989a-e84731a11ea0] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004047824s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (261.635527ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:56:30.472440   14248 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:56:30.472654   14248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:30.472667   14248 out.go:374] Setting ErrFile to fd 2...
	I1210 23:56:30.472673   14248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:30.472902   14248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:56:30.473170   14248 mustload.go:66] Loading cluster: addons-903947
	I1210 23:56:30.473552   14248 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:30.473574   14248 addons.go:622] checking whether the cluster is paused
	I1210 23:56:30.473681   14248 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:30.473696   14248 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:56:30.474270   14248 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:56:30.492807   14248 ssh_runner.go:195] Run: systemctl --version
	I1210 23:56:30.492864   14248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:56:30.513450   14248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:56:30.617754   14248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:56:30.617869   14248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:56:30.652590   14248 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:56:30.652625   14248 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:56:30.652631   14248 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:56:30.652635   14248 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:56:30.652638   14248 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:56:30.652642   14248 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:56:30.652645   14248 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:56:30.652649   14248 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:56:30.652652   14248 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:56:30.652659   14248 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:56:30.652667   14248 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:56:30.652670   14248 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:56:30.652673   14248 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:56:30.652676   14248 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:56:30.652680   14248 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:56:30.652686   14248 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:56:30.652689   14248 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:56:30.652693   14248 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:56:30.652696   14248 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:56:30.652699   14248 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:56:30.652704   14248 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:56:30.652710   14248 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:56:30.652713   14248 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:56:30.652717   14248 cri.go:89] found id: ""
	I1210 23:56:30.652774   14248 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:56:30.668364   14248 out.go:203] 
	W1210 23:56:30.671254   14248 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:56:30.671277   14248 out.go:285] * 
	* 
	W1210 23:56:30.675633   14248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:56:30.678566   14248 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
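The gadget pod itself went healthy in about 5s; only the disable step failed, again inside the paused-check. A quick Go probe for the underlying condition (the path comes from the error message above):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		const stateDir = "/run/runc" // path taken from the error in the logs above
		if _, err := os.Stat(stateDir); os.IsNotExist(err) {
			fmt.Printf("%s is missing; `sudo runc list -f json` will exit 1 here\n", stateDir)
			return
		}
		fmt.Printf("%s exists; runc list should be able to read container state\n", stateDir)
	}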
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (5.34s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.728414ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003576859s
addons_test.go:465: (dbg) Run:  kubectl --context addons-903947 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (244.072143ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:55:45.676025   13247 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:45.676195   13247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:45.676204   13247 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:45.676208   13247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:45.676487   13247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:45.676809   13247 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:45.677230   13247 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:45.677253   13247 addons.go:622] checking whether the cluster is paused
	I1210 23:55:45.677367   13247 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:45.677383   13247 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:45.677965   13247 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:45.696290   13247 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:45.696341   13247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:45.713025   13247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:45.817630   13247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:45.817776   13247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:45.847197   13247 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:45.847237   13247 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:45.847243   13247 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:45.847248   13247 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:45.847252   13247 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:45.847255   13247 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:45.847259   13247 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:45.847262   13247 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:45.847265   13247 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:45.847272   13247 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:45.847279   13247 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:45.847282   13247 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:45.847285   13247 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:45.847288   13247 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:45.847292   13247 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:45.847297   13247 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:45.847303   13247 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:45.847307   13247 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:45.847310   13247 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:45.847313   13247 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:45.847318   13247 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:45.847324   13247 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:45.847327   13247 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:45.847330   13247 cri.go:89] found id: ""
	I1210 23:55:45.847389   13247 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:45.862535   13247 out.go:203] 
	W1210 23:55:45.864074   13247 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:45.864107   13247 out.go:285] * 
	* 
	W1210 23:55:45.868548   13247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:45.869854   13247 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable metrics-server --alsologtostderr -v=1": exit status 11
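The part of this test that passed, waiting until a labeled pod reports Running, is a plain poll loop. A hedged client-go sketch of the same pattern (the helper name, interval, and timeout are illustrative, not the test's actual code):

	package main
	
	import (
		"context"
		"fmt"
		"os"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForLabel polls until some pod matching selector reports Running,
	// mirroring the behavior of the helper seen above. Illustrative only.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("no Running pod for %q in %q within %v", selector, ns, timeout)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("build client:", err)
			return
		}
		if err := waitForLabel(context.Background(), cs, "kube-system", "k8s-app=metrics-server", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("k8s-app=metrics-server is Running")
	}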
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)

TestAddons/parallel/CSI (42.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 23:55:43.138368    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 23:55:43.144347    4875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 23:55:43.144374    4875 kapi.go:107] duration metric: took 6.018185ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.028737ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-903947 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-903947 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [1d401d54-8dcf-4b8a-b9a5-2dfc4b9d6d5e] Pending
helpers_test.go:353: "task-pv-pod" [1d401d54-8dcf-4b8a-b9a5-2dfc4b9d6d5e] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003411952s
addons_test.go:574: (dbg) Run:  kubectl --context addons-903947 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-903947 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-903947 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-903947 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-903947 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-903947 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-903947 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [82de149a-abc4-4187-afc8-81e6005968b1] Pending
helpers_test.go:353: "task-pv-pod-restore" [82de149a-abc4-4187-afc8-81e6005968b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [82de149a-abc4-4187-afc8-81e6005968b1] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003509413s
addons_test.go:616: (dbg) Run:  kubectl --context addons-903947 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-903947 delete pod task-pv-pod-restore: (1.17261817s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-903947 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-903947 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (271.545653ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:56:24.935803   14142 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:56:24.936057   14142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:24.936093   14142 out.go:374] Setting ErrFile to fd 2...
	I1210 23:56:24.936115   14142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:24.936380   14142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:56:24.936677   14142 mustload.go:66] Loading cluster: addons-903947
	I1210 23:56:24.937090   14142 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:24.937140   14142 addons.go:622] checking whether the cluster is paused
	I1210 23:56:24.937273   14142 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:24.937311   14142 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:56:24.937859   14142 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:56:24.954401   14142 ssh_runner.go:195] Run: systemctl --version
	I1210 23:56:24.954466   14142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:56:24.980225   14142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:56:25.089953   14142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:56:25.090048   14142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:56:25.123150   14142 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:56:25.123173   14142 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:56:25.123178   14142 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:56:25.123183   14142 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:56:25.123186   14142 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:56:25.123190   14142 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:56:25.123194   14142 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:56:25.123198   14142 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:56:25.123201   14142 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:56:25.123209   14142 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:56:25.123212   14142 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:56:25.123216   14142 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:56:25.123219   14142 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:56:25.123222   14142 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:56:25.123226   14142 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:56:25.123235   14142 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:56:25.123240   14142 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:56:25.123245   14142 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:56:25.123258   14142 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:56:25.123262   14142 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:56:25.123267   14142 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:56:25.123273   14142 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:56:25.123276   14142 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:56:25.123280   14142 cri.go:89] found id: ""
	I1210 23:56:25.123336   14142 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:56:25.139179   14142 out.go:203] 
	W1210 23:56:25.142260   14142 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:56:25.142287   14142 out.go:285] * 
	* 
	W1210 23:56:25.146740   14142 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:56:25.149726   14142 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (258.402258ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:56:25.207825   14189 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:56:25.208033   14189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:25.208046   14189 out.go:374] Setting ErrFile to fd 2...
	I1210 23:56:25.208052   14189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:56:25.208349   14189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:56:25.208674   14189 mustload.go:66] Loading cluster: addons-903947
	I1210 23:56:25.209090   14189 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:25.209115   14189 addons.go:622] checking whether the cluster is paused
	I1210 23:56:25.209262   14189 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:56:25.209281   14189 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:56:25.209845   14189 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:56:25.229871   14189 ssh_runner.go:195] Run: systemctl --version
	I1210 23:56:25.229950   14189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:56:25.249261   14189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:56:25.353765   14189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:56:25.353864   14189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:56:25.382796   14189 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:56:25.382820   14189 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:56:25.382825   14189 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:56:25.382829   14189 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:56:25.382833   14189 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:56:25.382839   14189 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:56:25.382842   14189 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:56:25.382846   14189 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:56:25.382883   14189 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:56:25.382895   14189 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:56:25.382903   14189 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:56:25.382913   14189 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:56:25.382917   14189 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:56:25.382920   14189 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:56:25.382924   14189 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:56:25.382932   14189 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:56:25.382961   14189 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:56:25.383033   14189 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:56:25.383038   14189 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:56:25.383042   14189 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:56:25.383048   14189 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:56:25.383057   14189 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:56:25.383060   14189 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:56:25.383063   14189 cri.go:89] found id: ""
	I1210 23:56:25.383129   14189 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:56:25.398672   14189 out.go:203] 
	W1210 23:56:25.401537   14189 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:56:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:56:25.401561   14189 out.go:285] * 
	* 
	W1210 23:56:25.406035   14189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:56:25.409017   14189 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
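The long run of `kubectl get pvc ... -o jsonpath={.status.phase}` lines above is the same poll-until-phase pattern, shelled out to kubectl. A sketch under the same assumptions (the function name and 2s interval are illustrative, not the test's actual helper):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitPVCBound shells out to kubectl the way the helper above does,
	// polling the claim's phase until it reports Bound. Illustrative only.
	func waitPVCBound(kubectx, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}
	
	func main() {
		if err := waitPVCBound("addons-903947", "default", "hpvc-restore", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("claim is Bound")
	}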
--- FAIL: TestAddons/parallel/CSI (42.28s)

TestAddons/parallel/Headlamp (3.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-903947 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-903947 --alsologtostderr -v=1: exit status 11 (259.738149ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 23:55:19.540864   12031 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:19.541118   12031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:19.541147   12031 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:19.541167   12031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:19.541470   12031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:19.541854   12031 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:19.542790   12031 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:19.542848   12031 addons.go:622] checking whether the cluster is paused
	I1210 23:55:19.543050   12031 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:19.543086   12031 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:19.543659   12031 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:19.561997   12031 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:19.562067   12031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:19.579031   12031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:19.685400   12031 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:19.685519   12031 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:19.713187   12031 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:19.713209   12031 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:19.713226   12031 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:19.713234   12031 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:19.713238   12031 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:19.713242   12031 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:19.713245   12031 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:19.713248   12031 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:19.713252   12031 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:19.713258   12031 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:19.713264   12031 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:19.713267   12031 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:19.713271   12031 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:19.713274   12031 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:19.713278   12031 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:19.713283   12031 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:19.713288   12031 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:19.713292   12031 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:19.713295   12031 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:19.713298   12031 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:19.713303   12031 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:19.713307   12031 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:19.713310   12031 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:19.713315   12031 cri.go:89] found id: ""
	I1210 23:55:19.713370   12031 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:19.727712   12031 out.go:203] 
	W1210 23:55:19.730524   12031 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:19.730555   12031 out.go:285] * 
	* 
	W1210 23:55:19.734816   12031 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:19.737674   12031 out.go:203] 

** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-903947 --alsologtostderr -v=1": exit status 11
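
The root cause is visible in the stderr above: the addon command refuses to proceed because its paused-state check fails, not because headlamp itself is broken. minikube first lists kube-system containers with crictl (cri.go:54, which succeeds and finds 23 IDs), then shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node. A minimal Go sketch of that two-step probe, reconstructed from the commands in the log rather than taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: container IDs in kube-system, as run at cri.go:54 above (succeeds).
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d container ids\n", len(strings.Fields(string(ids))))

	// Step 2: ask runc for paused state, as run just before the failure above.
	// On this node it exits 1 with "open /run/runc: no such file or directory"
	// (presumably the crio runtime does not populate that state directory),
	// which surfaces as MK_ADDON_ENABLE_PAUSED.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Println("runc list ok")
}

The same probe precedes every addons command, which is consistent with the other addon tests in this report failing with the identical exit status 11.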
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-903947
helpers_test.go:244: (dbg) docker inspect addons-903947:

-- stdout --
	[
	    {
	        "Id": "2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b",
	        "Created": "2025-12-10T23:52:43.809539023Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 6270,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T23:52:43.874839573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/hosts",
	        "LogPath": "/var/lib/docker/containers/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b/2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b-json.log",
	        "Name": "/addons-903947",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-903947:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-903947",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f5b93e82992753e3bbcce9791aa16480266b4f52796b4d5560e7ec8080aa86b",
	                "LowerDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c0ca5a93a512b18c04ad0aa77c6e70c6c062bdd67225ac9e06f98498bc3aea4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-903947",
	                "Source": "/var/lib/docker/volumes/addons-903947/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-903947",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-903947",
	                "name.minikube.sigs.k8s.io": "addons-903947",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "958601d874b510856d61987408247d7176bcb83ca551675ae37eecc1197cff2c",
	            "SandboxKey": "/var/run/docker/netns/958601d874b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-903947": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:9b:6c:a9:93:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3cd589a87a23a69daa73698c71df3ec112b465d3a6d200d824f818ffb9afcf6a",
	                    "EndpointID": "660a3415a965e159c03e3332115143bcb326eb7a16dccb97694d8a88b14d043d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-903947",
	                        "2f5b93e82992"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
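
Note how the port map recorded above is consumed by the harness: the stderr earlier runs the inspect template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, which against this JSON yields 32768, the port the ssh client then dials at sshutil.go:53. A self-contained Go illustration of that template; the "binding" struct and sample data below are hypothetical stand-ins shaped like the JSON above, not Docker's API types:

package main

import (
	"os"
	"text/template"
)

func main() {
	type binding struct{ HostIp, HostPort string }
	// Stand-in for the NetworkSettings.Ports object printed above.
	data := struct {
		NetworkSettings struct{ Ports map[string][]binding }
	}{}
	data.NetworkSettings.Ports = map[string][]binding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "32768"}},
	}

	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	// Prints 32768, matching the SSH port in the inspect output above.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}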
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-903947 -n addons-903947
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-903947 logs -n 25: (1.407020599s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-838700   │ jenkins │ v1.37.0 │ 10 Dec 25 23:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ delete  │ -p download-only-838700                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-838700   │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ start   │ -o=json --download-only -p download-only-887652 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-887652   │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ delete  │ -p download-only-887652                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-887652   │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ start   │ -o=json --download-only -p download-only-669413 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-669413   │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ delete  │ -p download-only-669413                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-669413   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ delete  │ -p download-only-838700                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-838700   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ delete  │ -p download-only-887652                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-887652   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ delete  │ -p download-only-669413                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-669413   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ start   │ --download-only -p download-docker-685635 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-685635 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │                     │
	│ delete  │ -p download-docker-685635                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-685635 │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ start   │ --download-only -p binary-mirror-844379 --alsologtostderr --binary-mirror http://127.0.0.1:33593 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-844379   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │                     │
	│ delete  │ -p binary-mirror-844379                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-844379   │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:52 UTC │
	│ addons  │ enable dashboard -p addons-903947                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │                     │
	│ addons  │ disable dashboard -p addons-903947                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │                     │
	│ start   │ -p addons-903947 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:52 UTC │ 10 Dec 25 23:55 UTC │
	│ addons  │ addons-903947 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │                     │
	│ addons  │ addons-903947 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │                     │
	│ addons  │ enable headlamp -p addons-903947 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-903947          │ jenkins │ v1.37.0 │ 10 Dec 25 23:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:52:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:52:19.171327    5874 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:52:19.171555    5874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:52:19.171566    5874 out.go:374] Setting ErrFile to fd 2...
	I1210 23:52:19.171571    5874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:52:19.171874    5874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:52:19.172385    5874 out.go:368] Setting JSON to false
	I1210 23:52:19.173165    5874 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":226,"bootTime":1765410514,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 23:52:19.173233    5874 start.go:143] virtualization:  
	I1210 23:52:19.176675    5874 out.go:179] * [addons-903947] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 23:52:19.180376    5874 out.go:179]   - MINIKUBE_LOCATION=22061
	I1210 23:52:19.180448    5874 notify.go:221] Checking for updates...
	I1210 23:52:19.186181    5874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:52:19.189115    5874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:52:19.192049    5874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1210 23:52:19.194924    5874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 23:52:19.197881    5874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 23:52:19.201064    5874 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:52:19.223142    5874 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 23:52:19.223254    5874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:52:19.285912    5874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:52:19.276692644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:52:19.286014    5874 docker.go:319] overlay module found
	I1210 23:52:19.289042    5874 out.go:179] * Using the docker driver based on user configuration
	I1210 23:52:19.291816    5874 start.go:309] selected driver: docker
	I1210 23:52:19.291834    5874 start.go:927] validating driver "docker" against <nil>
	I1210 23:52:19.291857    5874 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 23:52:19.292626    5874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:52:19.350419    5874 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:52:19.341133695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:52:19.350568    5874 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:52:19.350811    5874 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:52:19.353825    5874 out.go:179] * Using Docker driver with root privileges
	I1210 23:52:19.356612    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:52:19.356677    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:52:19.356698    5874 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:52:19.356775    5874 start.go:353] cluster config:
	{Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:52:19.359940    5874 out.go:179] * Starting "addons-903947" primary control-plane node in "addons-903947" cluster
	I1210 23:52:19.362842    5874 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:52:19.365756    5874 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:52:19.368532    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:19.368576    5874 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 23:52:19.368588    5874 cache.go:65] Caching tarball of preloaded images
	I1210 23:52:19.368621    5874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:52:19.368670    5874 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 23:52:19.368681    5874 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:52:19.369019    5874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json ...
	I1210 23:52:19.369047    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json: {Name:mk735da483e0335fcfbe279682e15fa9f8c8dbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:19.384761    5874 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:52:19.384876    5874 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 23:52:19.384895    5874 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 23:52:19.384899    5874 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 23:52:19.384907    5874 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 23:52:19.384911    5874 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1210 23:52:37.081561    5874 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1210 23:52:37.081597    5874 cache.go:243] Successfully downloaded all kic artifacts
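
The cache lines above (preload.go:238, cache.go:178) show why this start performs no downloads: both the preloaded-images tarball and the kicbase image tarball were already present under this run's MINIKUBE_HOME. A rough sketch of that short-circuit, with the path taken from the log; the $HOME/.minikube default is an assumption (the run above uses a custom MINIKUBE_HOME), and this is illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Tarball name as logged at preload.go:203 above.
	preload := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(preload); err == nil {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("preload missing, would download it first")
	}
}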
	I1210 23:52:37.081636    5874 start.go:360] acquireMachinesLock for addons-903947: {Name:mk0f48a093bb9740038890b789e7cac9483bde49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 23:52:37.081764    5874 start.go:364] duration metric: took 105.622µs to acquireMachinesLock for "addons-903947"
	I1210 23:52:37.081789    5874 start.go:93] Provisioning new machine with config: &{Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:doc
ker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:52:37.081865    5874 start.go:125] createHost starting for "" (driver="docker")
	I1210 23:52:37.085242    5874 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 23:52:37.085477    5874 start.go:159] libmachine.API.Create for "addons-903947" (driver="docker")
	I1210 23:52:37.085510    5874 client.go:173] LocalClient.Create starting
	I1210 23:52:37.085621    5874 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem
	I1210 23:52:37.137587    5874 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem
	I1210 23:52:37.237801    5874 cli_runner.go:164] Run: docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 23:52:37.253431    5874 cli_runner.go:211] docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 23:52:37.253528    5874 network_create.go:284] running [docker network inspect addons-903947] to gather additional debugging logs...
	I1210 23:52:37.253552    5874 cli_runner.go:164] Run: docker network inspect addons-903947
	W1210 23:52:37.269123    5874 cli_runner.go:211] docker network inspect addons-903947 returned with exit code 1
	I1210 23:52:37.269153    5874 network_create.go:287] error running [docker network inspect addons-903947]: docker network inspect addons-903947: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-903947 not found
	I1210 23:52:37.269167    5874 network_create.go:289] output of [docker network inspect addons-903947]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-903947 not found
	
	** /stderr **
	I1210 23:52:37.269262    5874 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:52:37.286360    5874 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30c90}
	I1210 23:52:37.286403    5874 network_create.go:124] attempt to create docker network addons-903947 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 23:52:37.286461    5874 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-903947 addons-903947
	I1210 23:52:37.347995    5874 network_create.go:108] docker network addons-903947 192.168.49.0/24 created
	I1210 23:52:37.348035    5874 kic.go:121] calculated static IP "192.168.49.2" for the "addons-903947" container
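
The values logged at network.go:206 above (gateway, ClientMin, broadcast, and the static node IP calculated at kic.go:121) all derive mechanically from the chosen 192.168.49.0/24. A small illustration of that derivation; this is illustrative only, not minikube's own network package:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], 1)   // 192.168.49.1
	clientMin := net.IPv4(base[0], base[1], base[2], 2) // 192.168.49.2, the node's static IP
	broadcast := net.IPv4(base[0], base[1], base[2], 255)
	fmt.Println(ipnet, gateway, clientMin, broadcast)
}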
	I1210 23:52:37.348108    5874 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 23:52:37.363340    5874 cli_runner.go:164] Run: docker volume create addons-903947 --label name.minikube.sigs.k8s.io=addons-903947 --label created_by.minikube.sigs.k8s.io=true
	I1210 23:52:37.380912    5874 oci.go:103] Successfully created a docker volume addons-903947
	I1210 23:52:37.380993    5874 cli_runner.go:164] Run: docker run --rm --name addons-903947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --entrypoint /usr/bin/test -v addons-903947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 23:52:39.781978    5874 cli_runner.go:217] Completed: docker run --rm --name addons-903947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --entrypoint /usr/bin/test -v addons-903947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.400944771s)
	I1210 23:52:39.782025    5874 oci.go:107] Successfully prepared a docker volume addons-903947
	I1210 23:52:39.782078    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:39.782094    5874 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 23:52:39.782158    5874 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-903947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 23:52:43.750235    5874 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-903947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.968039797s)
	I1210 23:52:43.750266    5874 kic.go:203] duration metric: took 3.968169609s to extract preloaded images to volume ...
	W1210 23:52:43.750418    5874 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 23:52:43.750531    5874 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 23:52:43.795767    5874 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-903947 --name addons-903947 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-903947 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-903947 --network addons-903947 --ip 192.168.49.2 --volume addons-903947:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 23:52:44.100768    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Running}}
	I1210 23:52:44.123621    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.150902    5874 cli_runner.go:164] Run: docker exec addons-903947 stat /var/lib/dpkg/alternatives/iptables
	I1210 23:52:44.216972    5874 oci.go:144] the created container "addons-903947" has a running status.
	I1210 23:52:44.216999    5874 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa...
	I1210 23:52:44.713849    5874 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 23:52:44.743012    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.777439    5874 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 23:52:44.777460    5874 kic_runner.go:114] Args: [docker exec --privileged addons-903947 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 23:52:44.847314    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:52:44.869522    5874 machine.go:94] provisionDockerMachine start ...
	I1210 23:52:44.869632    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:44.890497    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:44.890904    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:44.890918    5874 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 23:52:45.131580    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-903947
	
	I1210 23:52:45.131608    5874 ubuntu.go:182] provisioning hostname "addons-903947"
	I1210 23:52:45.131696    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.170220    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:45.170571    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:45.170583    5874 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-903947 && echo "addons-903947" | sudo tee /etc/hostname
	I1210 23:52:45.380657    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-903947
	
	I1210 23:52:45.380810    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.400919    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:45.401221    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:45.401235    5874 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-903947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-903947/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-903947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 23:52:45.559167    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 23:52:45.559192    5874 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1210 23:52:45.559210    5874 ubuntu.go:190] setting up certificates
	I1210 23:52:45.559219    5874 provision.go:84] configureAuth start
	I1210 23:52:45.559280    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:45.578029    5874 provision.go:143] copyHostCerts
	I1210 23:52:45.578114    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1210 23:52:45.578244    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1210 23:52:45.578313    5874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1210 23:52:45.578373    5874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.addons-903947 san=[127.0.0.1 192.168.49.2 addons-903947 localhost minikube]
	I1210 23:52:45.916699    5874 provision.go:177] copyRemoteCerts
	I1210 23:52:45.916763    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 23:52:45.916816    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:45.933795    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.039308    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 23:52:46.057595    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 23:52:46.075589    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1210 23:52:46.096027    5874 provision.go:87] duration metric: took 536.784465ms to configureAuth
	I1210 23:52:46.096060    5874 ubuntu.go:206] setting minikube options for container-runtime
	I1210 23:52:46.096257    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:52:46.096370    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.113931    5874 main.go:143] libmachine: Using SSH client type: native
	I1210 23:52:46.114248    5874 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1210 23:52:46.114271    5874 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 23:52:46.422480    5874 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 23:52:46.422547    5874 machine.go:97] duration metric: took 1.553005215s to provisionDockerMachine
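Note: the /etc/sysconfig/crio.minikube drop-in written above injects --insecure-registry for the service CIDR (10.96.0.0/12), which is what later allows in-cluster registries to be reached over plain HTTP. A spot check after the restart (not part of the logged run):

    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry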
	I1210 23:52:46.422576    5874 client.go:176] duration metric: took 9.337058875s to LocalClient.Create
	I1210 23:52:46.422602    5874 start.go:167] duration metric: took 9.33712369s to libmachine.API.Create "addons-903947"
	I1210 23:52:46.422646    5874 start.go:293] postStartSetup for "addons-903947" (driver="docker")
	I1210 23:52:46.422671    5874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 23:52:46.422783    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 23:52:46.422893    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.440488    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.542785    5874 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 23:52:46.545915    5874 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 23:52:46.545945    5874 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 23:52:46.545957    5874 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1210 23:52:46.546025    5874 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1210 23:52:46.546054    5874 start.go:296] duration metric: took 123.389166ms for postStartSetup
	I1210 23:52:46.546371    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:46.563079    5874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/config.json ...
	I1210 23:52:46.563364    5874 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 23:52:46.563424    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.579790    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.684062    5874 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 23:52:46.688803    5874 start.go:128] duration metric: took 9.606923824s to createHost
	I1210 23:52:46.688831    5874 start.go:83] releasing machines lock for "addons-903947", held for 9.607057722s
	I1210 23:52:46.688925    5874 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-903947
	I1210 23:52:46.706447    5874 ssh_runner.go:195] Run: cat /version.json
	I1210 23:52:46.706499    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.706510    5874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 23:52:46.706578    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:52:46.730523    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.744965    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:52:46.932909    5874 ssh_runner.go:195] Run: systemctl --version
	I1210 23:52:46.940326    5874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 23:52:46.976860    5874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 23:52:46.981131    5874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 23:52:46.981229    5874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 23:52:47.017089    5874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
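Note: pre-existing bridge/podman CNI configs are parked with a .mk_disabled suffix so the kindnet config installed later is the only one the runtime loads from /etc/cni/net.d. The logged find command reads more easily with quoting restored:

    # Rename any bridge/podman CNI configs that are not already disabled.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;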
	I1210 23:52:47.017155    5874 start.go:496] detecting cgroup driver to use...
	I1210 23:52:47.017197    5874 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 23:52:47.017259    5874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 23:52:47.035334    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 23:52:47.048494    5874 docker.go:218] disabling cri-docker service (if available) ...
	I1210 23:52:47.048592    5874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 23:52:47.066571    5874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 23:52:47.085791    5874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 23:52:47.202792    5874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 23:52:47.329399    5874 docker.go:234] disabling docker service ...
	I1210 23:52:47.329463    5874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 23:52:47.350434    5874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 23:52:47.363054    5874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 23:52:47.491319    5874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 23:52:47.609495    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 23:52:47.621991    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 23:52:47.636080    5874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 23:52:47.636192    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.644922    5874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 23:52:47.645046    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.653723    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.662629    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.672291    5874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 23:52:47.680508    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.689319    5874 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.702465    5874 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 23:52:47.711280    5874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 23:52:47.718542    5874 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 23:52:47.718607    5874 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 23:52:47.732522    5874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 23:52:47.739715    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:52:47.849931    5874 ssh_runner.go:195] Run: sudo systemctl restart crio
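Note: the sed passes above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports. A spot check of the resulting drop-in (expected values per the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.10.1"
    #           cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
    #           "net.ipv4.ip_unprivileged_port_start=0",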
	I1210 23:52:48.011832    5874 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 23:52:48.011943    5874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 23:52:48.016994    5874 start.go:564] Will wait 60s for crictl version
	I1210 23:52:48.017064    5874 ssh_runner.go:195] Run: which crictl
	I1210 23:52:48.021861    5874 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 23:52:48.059679    5874 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 23:52:48.059815    5874 ssh_runner.go:195] Run: crio --version
	I1210 23:52:48.087926    5874 ssh_runner.go:195] Run: crio --version
	I1210 23:52:48.124319    5874 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 23:52:48.127263    5874 cli_runner.go:164] Run: docker network inspect addons-903947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 23:52:48.145181    5874 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 23:52:48.149265    5874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:52:48.159430    5874 kubeadm.go:884] updating cluster {Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 23:52:48.159555    5874 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:52:48.159625    5874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:52:48.195766    5874 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:52:48.195792    5874 crio.go:433] Images already preloaded, skipping extraction
	I1210 23:52:48.195853    5874 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 23:52:48.221110    5874 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 23:52:48.221135    5874 cache_images.go:86] Images are preloaded, skipping loading
	I1210 23:52:48.221145    5874 kubeadm.go:935] updating node { 192.168.49.2  8443 v1.34.2 crio true true} ...
	I1210 23:52:48.221280    5874 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-903947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
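Note on the kubelet unit above: in a systemd drop-in, the bare ExecStart= line clears any ExecStart inherited from the base kubelet.service, so the line that follows it becomes the sole start command. The merged result can be inspected on the node with:

    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in written below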
	I1210 23:52:48.221381    5874 ssh_runner.go:195] Run: crio config
	I1210 23:52:48.305414    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:52:48.305439    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:52:48.305485    5874 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 23:52:48.305516    5874 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-903947 NodeName:addons-903947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 23:52:48.305661    5874 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-903947"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 23:52:48.305738    5874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 23:52:48.313957    5874 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 23:52:48.314056    5874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 23:52:48.322443    5874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 23:52:48.336230    5874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 23:52:48.349795    5874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
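Note: the 2210-byte file scp'd above is the kubeadm config rendered earlier, stacking InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one YAML stream. Once it is copied to its final path (done further below), it can be sanity-checked without mutating the node:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run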
	I1210 23:52:48.363638    5874 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 23:52:48.367813    5874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 23:52:48.377828    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:52:48.488017    5874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:52:48.504798    5874 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947 for IP: 192.168.49.2
	I1210 23:52:48.504869    5874 certs.go:195] generating shared ca certs ...
	I1210 23:52:48.504902    5874 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.505103    5874 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1210 23:52:48.782161    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt ...
	I1210 23:52:48.782194    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt: {Name:mk00facc681767994d91bec52ecd40e1bc33b2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.782386    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key ...
	I1210 23:52:48.782398    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key: {Name:mk64669b159fea61000d44e52eed549edb0ea9c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:48.782485    5874 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1210 23:52:49.040690    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt ...
	I1210 23:52:49.040722    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt: {Name:mkac3ad3424e7224ff419ab1f6473c38ac84d334 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.040903    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key ...
	I1210 23:52:49.040915    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key: {Name:mk5e394e666c47a3926fb73141c0949c6c354e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.041006    5874 certs.go:257] generating profile certs ...
	I1210 23:52:49.041071    5874 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key
	I1210 23:52:49.041090    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt with IP's: []
	I1210 23:52:49.436571    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt ...
	I1210 23:52:49.436603    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: {Name:mk2d5db386af54c15b534b90c077692db22543e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.436790    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key ...
	I1210 23:52:49.436805    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.key: {Name:mk062f15586ea5f3e20f9e75be1c031b4c125750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.436893    5874 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf
	I1210 23:52:49.436916    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 23:52:49.545047    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf ...
	I1210 23:52:49.545079    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf: {Name:mka148db7dac5b0b29e61829b4c3b03dc742039b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.545259    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf ...
	I1210 23:52:49.545273    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf: {Name:mkcbeee4881c4e4210e1bf5857f5e5e22b5464b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.545362    5874 certs.go:382] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt.9ef3fcaf -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt
	I1210 23:52:49.545449    5874 certs.go:386] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key.9ef3fcaf -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key
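Note: the apiserver certificate generated above must carry every address a client might dial (the SAN list logged at 23:52:49.436916). What was actually signed can be confirmed with:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt \
      | grep -A1 'Subject Alternative Name'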
	I1210 23:52:49.545504    5874 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key
	I1210 23:52:49.545525    5874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt with IP's: []
	I1210 23:52:49.872237    5874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt ...
	I1210 23:52:49.872269    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt: {Name:mk4ab41a5dc3ab83e488b3a97ff6d97cb4b3baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.872444    5874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key ...
	I1210 23:52:49.872457    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key: {Name:mk30743ce08c26a9ec595be5daa0df8f1c130c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:52:49.872647    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1210 23:52:49.872691    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1210 23:52:49.872716    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1210 23:52:49.872747    5874 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1210 23:52:49.873302    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 23:52:49.891544    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 23:52:49.909669    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 23:52:49.928132    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1210 23:52:49.945094    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 23:52:49.961677    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 23:52:49.978650    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 23:52:49.995594    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 23:52:50.022483    5874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 23:52:50.042350    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 23:52:50.056207    5874 ssh_runner.go:195] Run: openssl version
	I1210 23:52:50.062643    5874 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.070748    5874 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 23:52:50.078476    5874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.082396    5874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.082467    5874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 23:52:50.125465    5874 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 23:52:50.132999    5874 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
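Note: the b5213941.0 name follows OpenSSL's subject-hash convention, which is how tools scanning /etc/ssl/certs locate the CA. Both steps can be reproduced by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # should print OK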
	I1210 23:52:50.140314    5874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 23:52:50.143908    5874 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 23:52:50.143957    5874 kubeadm.go:401] StartCluster: {Name:addons-903947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-903947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:52:50.144043    5874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:52:50.144103    5874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:52:50.185455    5874 cri.go:89] found id: ""
	I1210 23:52:50.185577    5874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 23:52:50.195454    5874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 23:52:50.204154    5874 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 23:52:50.204220    5874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 23:52:50.213136    5874 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 23:52:50.213155    5874 kubeadm.go:158] found existing configuration files:
	
	I1210 23:52:50.213206    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 23:52:50.221013    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 23:52:50.221083    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 23:52:50.228288    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 23:52:50.235782    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 23:52:50.235870    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 23:52:50.243155    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 23:52:50.250940    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 23:52:50.251021    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 23:52:50.258109    5874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 23:52:50.265354    5874 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 23:52:50.265466    5874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 23:52:50.272852    5874 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 23:52:50.316765    5874 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 23:52:50.317132    5874 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 23:52:50.343525    5874 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 23:52:50.343709    5874 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 23:52:50.343781    5874 kubeadm.go:319] OS: Linux
	I1210 23:52:50.343859    5874 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 23:52:50.343937    5874 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 23:52:50.344014    5874 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 23:52:50.344093    5874 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 23:52:50.344170    5874 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 23:52:50.344249    5874 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 23:52:50.344326    5874 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 23:52:50.344402    5874 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 23:52:50.344481    5874 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 23:52:50.407506    5874 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 23:52:50.407654    5874 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 23:52:50.407750    5874 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 23:52:50.416946    5874 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 23:52:50.423179    5874 out.go:252]   - Generating certificates and keys ...
	I1210 23:52:50.423281    5874 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 23:52:50.423359    5874 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 23:52:50.531787    5874 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 23:52:51.285198    5874 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 23:52:51.541175    5874 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 23:52:52.249793    5874 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 23:52:54.278419    5874 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 23:52:54.278561    5874 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-903947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 23:52:55.481575    5874 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 23:52:55.481934    5874 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-903947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 23:52:56.164698    5874 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 23:52:56.491289    5874 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 23:52:56.727823    5874 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 23:52:56.728287    5874 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 23:52:56.969966    5874 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 23:52:57.577566    5874 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 23:52:58.158250    5874 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 23:52:59.244577    5874 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 23:52:59.464697    5874 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 23:52:59.465275    5874 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 23:52:59.467870    5874 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 23:52:59.471335    5874 out.go:252]   - Booting up control plane ...
	I1210 23:52:59.471433    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 23:52:59.471510    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 23:52:59.471577    5874 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 23:52:59.486123    5874 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 23:52:59.486435    5874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 23:52:59.496279    5874 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 23:52:59.496679    5874 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 23:52:59.496884    5874 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 23:52:59.629115    5874 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 23:52:59.629234    5874 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 23:53:00.631315    5874 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00123898s
	I1210 23:53:00.633901    5874 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 23:53:00.633992    5874 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 23:53:00.634082    5874 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 23:53:00.634169    5874 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 23:53:04.138739    5874 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.504313587s
	I1210 23:53:04.763078    5874 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.129092459s
	I1210 23:53:06.635865    5874 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001720356s
	I1210 23:53:06.670334    5874 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 23:53:06.689297    5874 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 23:53:06.707629    5874 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 23:53:06.707851    5874 kubeadm.go:319] [mark-control-plane] Marking the node addons-903947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 23:53:06.724791    5874 kubeadm.go:319] [bootstrap-token] Using token: uj2agy.orhyfdtqxpvj5c65
	I1210 23:53:06.727780    5874 out.go:252]   - Configuring RBAC rules ...
	I1210 23:53:06.727920    5874 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 23:53:06.742259    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 23:53:06.751958    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 23:53:06.756255    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 23:53:06.762800    5874 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 23:53:06.769210    5874 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 23:53:07.042944    5874 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 23:53:07.509699    5874 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 23:53:08.042580    5874 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 23:53:08.043776    5874 kubeadm.go:319] 
	I1210 23:53:08.043860    5874 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 23:53:08.043870    5874 kubeadm.go:319] 
	I1210 23:53:08.043947    5874 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 23:53:08.043963    5874 kubeadm.go:319] 
	I1210 23:53:08.043989    5874 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 23:53:08.044055    5874 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 23:53:08.044112    5874 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 23:53:08.044117    5874 kubeadm.go:319] 
	I1210 23:53:08.044171    5874 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 23:53:08.044178    5874 kubeadm.go:319] 
	I1210 23:53:08.044226    5874 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 23:53:08.044234    5874 kubeadm.go:319] 
	I1210 23:53:08.044286    5874 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 23:53:08.044363    5874 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 23:53:08.044435    5874 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 23:53:08.044443    5874 kubeadm.go:319] 
	I1210 23:53:08.044528    5874 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 23:53:08.044608    5874 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 23:53:08.044616    5874 kubeadm.go:319] 
	I1210 23:53:08.044700    5874 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uj2agy.orhyfdtqxpvj5c65 \
	I1210 23:53:08.044808    5874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:695d64cb3b1088e1978dc6911e99e852648cf50ad98520ca6a673e7aef325366 \
	I1210 23:53:08.044832    5874 kubeadm.go:319] 	--control-plane 
	I1210 23:53:08.044839    5874 kubeadm.go:319] 
	I1210 23:53:08.044924    5874 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 23:53:08.044932    5874 kubeadm.go:319] 
	I1210 23:53:08.045014    5874 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uj2agy.orhyfdtqxpvj5c65 \
	I1210 23:53:08.045120    5874 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:695d64cb3b1088e1978dc6911e99e852648cf50ad98520ca6a673e7aef325366 
	I1210 23:53:08.049306    5874 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 23:53:08.049521    5874 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 23:53:08.049625    5874 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
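Note on the three preflight warnings: SystemVerification is expected on a cgroups v1 host and is explicitly listed in --ignore-preflight-errors above, and the Service-Kubelet warning is harmless here because minikube starts the kubelet itself; if desired, it can be silenced on the node with:

    sudo systemctl enable kubelet.service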
	I1210 23:53:08.049654    5874 cni.go:84] Creating CNI manager for ""
	I1210 23:53:08.049666    5874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:53:08.052819    5874 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 23:53:08.055756    5874 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 23:53:08.060529    5874 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 23:53:08.060550    5874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 23:53:08.075357    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
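Note: once the kindnet manifest is applied, the CNI pods should come up in kube-system. A quick check (the app=kindnet label is an assumption based on the upstream kindnet manifest, not shown in this log):

    kubectl -n kube-system get pods -l app=kindnet -o wide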
	I1210 23:53:08.360154    5874 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 23:53:08.360292    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:08.360364    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-903947 minikube.k8s.io/updated_at=2025_12_10T23_53_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6 minikube.k8s.io/name=addons-903947 minikube.k8s.io/primary=true
	I1210 23:53:08.506428    5874 ops.go:34] apiserver oom_adj: -16
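Note: the oom_adj check above asserts the apiserver is shielded from the OOM killer. /proc/<pid>/oom_adj is the legacy interface (current kernels derive it from oom_score_adj); both views can be read directly:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj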
	I1210 23:53:08.506539    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:09.007935    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:09.506696    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:10.007501    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:10.506700    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:11.006722    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:11.506729    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:12.010941    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:12.506998    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:13.008184    5874 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 23:53:13.097545    5874 kubeadm.go:1114] duration metric: took 4.737294928s to wait for elevateKubeSystemPrivileges
	I1210 23:53:13.097576    5874 kubeadm.go:403] duration metric: took 22.953620046s to StartCluster
	I1210 23:53:13.097593    5874 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:53:13.097699    5874 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:53:13.098118    5874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:53:13.098320    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 23:53:13.098328    5874 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 23:53:13.098592    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:53:13.098635    5874 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 23:53:13.098722    5874 addons.go:70] Setting yakd=true in profile "addons-903947"
	I1210 23:53:13.098735    5874 addons.go:239] Setting addon yakd=true in "addons-903947"
	I1210 23:53:13.098756    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.099262    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.099399    5874 addons.go:70] Setting inspektor-gadget=true in profile "addons-903947"
	I1210 23:53:13.099418    5874 addons.go:239] Setting addon inspektor-gadget=true in "addons-903947"
	I1210 23:53:13.099448    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.099880    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.100352    5874 addons.go:70] Setting metrics-server=true in profile "addons-903947"
	I1210 23:53:13.100380    5874 addons.go:239] Setting addon metrics-server=true in "addons-903947"
	I1210 23:53:13.100439    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.100880    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.103123    5874 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-903947"
	I1210 23:53:13.103155    5874 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-903947"
	I1210 23:53:13.103185    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.103676    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.106876    5874 addons.go:70] Setting registry=true in profile "addons-903947"
	I1210 23:53:13.106903    5874 addons.go:239] Setting addon registry=true in "addons-903947"
	I1210 23:53:13.106941    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.107416    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.110108    5874 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-903947"
	I1210 23:53:13.110188    5874 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-903947"
	I1210 23:53:13.110250    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.110785    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122380    5874 addons.go:70] Setting cloud-spanner=true in profile "addons-903947"
	I1210 23:53:13.122463    5874 addons.go:239] Setting addon cloud-spanner=true in "addons-903947"
	I1210 23:53:13.122511    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.123155    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.130658    5874 out.go:179] * Verifying Kubernetes components...
	I1210 23:53:13.140161    5874 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-903947"
	I1210 23:53:13.140509    5874 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-903947"
	I1210 23:53:13.142287    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.144086    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140348    5874 addons.go:70] Setting default-storageclass=true in profile "addons-903947"
	I1210 23:53:13.144420    5874 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-903947"
	I1210 23:53:13.151533    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122399    5874 addons.go:70] Setting storage-provisioner=true in profile "addons-903947"
	I1210 23:53:13.154140    5874 addons.go:239] Setting addon storage-provisioner=true in "addons-903947"
	I1210 23:53:13.154231    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.154901    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.159075    5874 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 23:53:13.122408    5874 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-903947"
	I1210 23:53:13.159399    5874 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-903947"
	I1210 23:53:13.159727    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.172593    5874 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 23:53:13.172666    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 23:53:13.172770    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.122420    5874 addons.go:70] Setting volcano=true in profile "addons-903947"
	I1210 23:53:13.177779    5874 addons.go:239] Setting addon volcano=true in "addons-903947"
	I1210 23:53:13.177826    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.178281    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.122427    5874 addons.go:70] Setting volumesnapshots=true in profile "addons-903947"
	I1210 23:53:13.188463    5874 addons.go:239] Setting addon volumesnapshots=true in "addons-903947"
	I1210 23:53:13.188596    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.197937    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140357    5874 addons.go:70] Setting gcp-auth=true in profile "addons-903947"
	I1210 23:53:13.219082    5874 mustload.go:66] Loading cluster: addons-903947
	I1210 23:53:13.219321    5874 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:53:13.219607    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140361    5874 addons.go:70] Setting ingress=true in profile "addons-903947"
	I1210 23:53:13.234736    5874 addons.go:239] Setting addon ingress=true in "addons-903947"
	I1210 23:53:13.234792    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.235579    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.140364    5874 addons.go:70] Setting ingress-dns=true in profile "addons-903947"
	I1210 23:53:13.249274    5874 addons.go:239] Setting addon ingress-dns=true in "addons-903947"
	I1210 23:53:13.249341    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.249871    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.295442    5874 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 23:53:13.122379    5874 addons.go:70] Setting registry-creds=true in profile "addons-903947"
	I1210 23:53:13.296431    5874 addons.go:239] Setting addon registry-creds=true in "addons-903947"
	I1210 23:53:13.296486    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.297209    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.302719    5874 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 23:53:13.302750    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 23:53:13.302818    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.140674    5874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 23:53:13.323114    5874 addons.go:239] Setting addon default-storageclass=true in "addons-903947"
	I1210 23:53:13.323151    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.323576    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.333501    5874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1210 23:53:13.333839    5874 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 23:53:13.338041    5874 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 23:53:13.338328    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
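
Each `docker container inspect -f` call above resolves the host-side port that Docker mapped to the node container's SSH port 22; the result feeds the ssh clients created here (127.0.0.1:32768 in this run). For reference, either of these one-liners recovers the same mapping, assuming the profile name from the log:

    # Go-template form, as used by minikube:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-903947
    # Shorter equivalent:
    docker port addons-903947 22/tcp   # e.g. 0.0.0.0:32768
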
	I1210 23:53:13.340886    5874 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 23:53:13.341001    5874 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 23:53:13.343006    5874 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 23:53:13.343160    5874 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 23:53:13.343315    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 23:53:13.349200    5874 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 23:53:13.349234    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 23:53:13.349327    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.349352    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 23:53:13.349366    5874 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 23:53:13.349412    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.351412    5874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:53:13.354024    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 23:53:13.354176    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.367537    5874 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 23:53:13.367561    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 23:53:13.367633    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.353974    5874 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 23:53:13.370814    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 23:53:13.370878    5874 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 23:53:13.371003    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.353984    5874 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 23:53:13.403092    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 23:53:13.404300    5874 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 23:53:13.404321    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 23:53:13.404393    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.411980    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.414602    5874 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-903947"
	I1210 23:53:13.414647    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:13.415215    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:13.415507    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 23:53:13.419668    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 23:53:13.445130    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 23:53:13.448982    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 23:53:13.451873    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 23:53:13.455223    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 23:53:13.458108    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:13.463018    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 23:53:13.465956    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:13.466075    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 23:53:13.466089    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 23:53:13.466184    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.489753    5874 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 23:53:13.489775    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 23:53:13.489862    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.490157    5874 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 23:53:13.506148    5874 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 23:53:13.506413    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 23:53:13.506427    5874 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 23:53:13.506512    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.513696    5874 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 23:53:13.514002    5874 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 23:53:13.514047    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 23:53:13.514116    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.521528    5874 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 23:53:13.521563    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 23:53:13.521634    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.543598    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.552022    5874 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 23:53:13.552048    5874 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 23:53:13.552110    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.595514    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.640723    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.644435    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.651092    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.672442    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.680232    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.720110    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.726258    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.726843    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.737110    5874 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 23:53:13.742038    5874 out.go:179]   - Using image docker.io/busybox:stable
	I1210 23:53:13.743075    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.745407    5874 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 23:53:13.745427    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 23:53:13.745489    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:13.750281    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.761284    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.788774    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:13.801254    5874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 23:53:14.110789    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 23:53:14.174478    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 23:53:14.178746    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 23:53:14.178770    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 23:53:14.266552    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 23:53:14.266575    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 23:53:14.290521    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 23:53:14.293159    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 23:53:14.293182    5874 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 23:53:14.308904    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 23:53:14.314191    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 23:53:14.319593    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 23:53:14.344686    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 23:53:14.344711    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 23:53:14.358097    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 23:53:14.364068    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 23:53:14.368378    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 23:53:14.405685    5874 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 23:53:14.405719    5874 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 23:53:14.416445    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 23:53:14.416525    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 23:53:14.416828    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 23:53:14.416877    5874 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 23:53:14.425231    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 23:53:14.427477    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 23:53:14.427540    5874 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 23:53:14.485562    5874 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 23:53:14.485632    5874 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 23:53:14.533988    5874 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 23:53:14.534057    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 23:53:14.536289    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 23:53:14.536361    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 23:53:14.575790    5874 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.242256988s)
	I1210 23:53:14.575870    5874 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
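
The pipeline that just completed rewrites the coredns ConfigMap in place: sed splices a hosts block (192.168.49.1 mapped to host.minikube.internal, with fallthrough) ahead of the `forward . /etc/resolv.conf` plugin and turns on `log`, then kubectl replace writes the result back. A quick way to confirm the injected record, assuming kubectl is pointed at this cluster:

    # Dump the live Corefile and show the injected hosts block.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A 3 'hosts {'
    # Expected fragment:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
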
	I1210 23:53:14.577167    5874 node_ready.go:35] waiting up to 6m0s for node "addons-903947" to be "Ready" ...
	I1210 23:53:14.584671    5874 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 23:53:14.584743    5874 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 23:53:14.647076    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 23:53:14.647146    5874 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 23:53:14.663790    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 23:53:14.663864    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 23:53:14.666904    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 23:53:14.666985    5874 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 23:53:14.703011    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 23:53:14.803619    5874 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 23:53:14.803691    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 23:53:14.822819    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 23:53:14.822893    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 23:53:14.833514    5874 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:14.833581    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 23:53:14.836184    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 23:53:14.985017    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:15.007940    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 23:53:15.015735    5874 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 23:53:15.015822    5874 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 23:53:15.083968    5874 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-903947" context rescaled to 1 replicas
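
kapi.go:214 trims the stock coredns Deployment from two replicas down to one, which is all a single-node profile needs. The equivalent manual step is a one-liner (namespace and name taken from the log):

    # Scale coredns down to a single replica.
    kubectl -n kube-system scale deployment coredns --replicas=1
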
	I1210 23:53:15.262167    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 23:53:15.262239    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 23:53:15.444287    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 23:53:15.444314    5874 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 23:53:15.615295    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 23:53:15.615383    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 23:53:15.900575    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 23:53:15.900647    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 23:53:16.139731    5874 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 23:53:16.139810    5874 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 23:53:16.375722    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 23:53:16.587952    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:17.345465    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.170932567s)
	I1210 23:53:17.687900    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.397342619s)
	W1210 23:53:18.589928    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:19.012542    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.698312518s)
	I1210 23:53:19.012670    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.654546341s)
	I1210 23:53:19.012755    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.648663184s)
	I1210 23:53:19.012800    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.644399449s)
	I1210 23:53:19.013114    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.587822015s)
	I1210 23:53:19.013267    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.310186654s)
	I1210 23:53:19.013288    5874 addons.go:495] Verifying addon registry=true in "addons-903947"
	I1210 23:53:19.013502    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.692985747s)
	I1210 23:53:19.013746    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.177488801s)
	I1210 23:53:19.013762    5874 addons.go:495] Verifying addon metrics-server=true in "addons-903947"
	I1210 23:53:19.013829    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.028733845s)
	I1210 23:53:19.013850    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.005837741s)
	W1210 23:53:19.013857    5874 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 23:53:19.013895    5874 retry.go:31] will retry after 167.822431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
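
The failure and retry above are the usual CRD-establishment race: a single `kubectl apply` creates both the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, but the API server is not yet serving the new type when the object arrives, hence `no matches for kind "VolumeSnapshotClass"`. minikube handles it by retrying (and, a few lines below, re-applying with --force, which succeeds). An alternative sketch is to gate the custom resource on the CRD becoming established (file paths as in the log):

    # Apply the CRD first, wait for the API server to serve it, then apply the CR.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
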
	I1210 23:53:19.014390    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.705457263s)
	I1210 23:53:19.014417    5874 addons.go:495] Verifying addon ingress=true in "addons-903947"
	I1210 23:53:19.016850    5874 out.go:179] * Verifying registry addon...
	I1210 23:53:19.018724    5874 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-903947 service yakd-dashboard -n yakd-dashboard
	
	I1210 23:53:19.018875    5874 out.go:179] * Verifying ingress addon...
	I1210 23:53:19.022379    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 23:53:19.023375    5874 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1210 23:53:19.038009    5874 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 23:53:19.038134    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:19.038118    5874 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 23:53:19.038203    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
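
The kapi.go:96 lines from here on are minikube's own poll loop: roughly every half second in this run it re-lists the pods behind each label selector until they leave Pending. Outside the harness, the same waits can be expressed declaratively (a sketch; selectors and namespaces come from the log, the timeout is arbitrary, and the ingress-nginx namespace also holds run-to-completion certgen pods that never become Ready, so those may need filtering out):

    # Block until the addon pods report Ready.
    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m
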
	W1210 23:53:19.044676    5874 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1210 23:53:19.181993    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 23:53:19.296452    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.920671667s)
	I1210 23:53:19.296491    5874 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-903947"
	I1210 23:53:19.299581    5874 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 23:53:19.304083    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 23:53:19.316891    5874 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 23:53:19.316920    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:19.532243    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:19.532626    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:19.807378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:20.031720    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:20.031939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:20.307916    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:20.526948    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:20.527573    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:20.808057    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.022297    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 23:53:21.022387    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:21.030797    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:21.031685    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:21.041487    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	W1210 23:53:21.085046    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:21.156289    5874 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 23:53:21.173690    5874 addons.go:239] Setting addon gcp-auth=true in "addons-903947"
	I1210 23:53:21.173785    5874 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:53:21.174244    5874 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:53:21.191559    5874 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 23:53:21.191614    5874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:53:21.208835    5874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:53:21.309064    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.527380    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:21.527904    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:21.808769    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:21.908302    5874 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.726264863s)
	I1210 23:53:21.911159    5874 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 23:53:21.914149    5874 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 23:53:21.917047    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 23:53:21.917083    5874 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 23:53:21.933719    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 23:53:21.933741    5874 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 23:53:21.946869    5874 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 23:53:21.946896    5874 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 23:53:21.960107    5874 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 23:53:22.027956    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:22.028354    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:22.307630    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:22.484550    5874 addons.go:495] Verifying addon gcp-auth=true in "addons-903947"
	I1210 23:53:22.487652    5874 out.go:179] * Verifying gcp-auth addon...
	I1210 23:53:22.491402    5874 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 23:53:22.497579    5874 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 23:53:22.497603    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:22.525994    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:22.527153    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:22.807554    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:22.995079    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:23.026860    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:23.027121    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:23.306790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:23.499454    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:23.530907    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:23.531522    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1210 23:53:23.579977    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:23.806879    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:23.994760    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:24.025869    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:24.028315    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:24.307445    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:24.494461    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:24.526881    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll lines for these four label selectors repeat roughly every 500ms from 23:53:24 through 23:53:53 and are elided here; the node-readiness warnings from that window are kept below ...]
	W1210 23:53:25.580197    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:27.580468    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:30.080726    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:32.580350    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:35.080641    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:37.580136    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:40.080619    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:42.081388    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:44.581081    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:46.581448    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:49.080604    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	W1210 23:53:51.580283    5874 node_ready.go:57] node "addons-903947" has "Ready":"False" status (will retry)
	I1210 23:53:52.807378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:52.994948    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:53.026135    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:53.025858    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:53.096383    5874 node_ready.go:49] node "addons-903947" is "Ready"
	I1210 23:53:53.096407    5874 node_ready.go:38] duration metric: took 38.519087235s for node "addons-903947" to be "Ready" ...
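
The 38.5s wait that just completed is minikube's node-readiness poll (node_ready.go): fetch the node object, inspect its Ready condition, and retry while it reports "False". A minimal client-go sketch of the same check, assuming a kubeconfig at the default path and the node name "addons-903947" from this log (a standalone illustration, not minikube's actual code):

    // Poll a node's Ready condition until it turns True, roughly as the
    // node_ready.go wait above does. Hypothetical standalone version.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-903947", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the log above retries on a similar cadence
        }
    }
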
	I1210 23:53:53.096420    5874 api_server.go:52] waiting for apiserver process to appear ...
	I1210 23:53:53.096474    5874 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 23:53:53.111419    5874 api_server.go:72] duration metric: took 40.013063088s to wait for apiserver process to appear ...
	I1210 23:53:53.111443    5874 api_server.go:88] waiting for apiserver healthz status ...
	I1210 23:53:53.111462    5874 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 23:53:53.176983    5874 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 23:53:53.189155    5874 api_server.go:141] control plane version: v1.34.2
	I1210 23:53:53.189229    5874 api_server.go:131] duration metric: took 77.778365ms to wait for apiserver health ...
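
The healthz probe above is a plain HTTPS GET against https://192.168.49.2:8443/healthz that counts as healthy once it returns 200 with body "ok". A bare-bones equivalent in Go; this sketch skips certificate verification only because it does not load minikube's cluster CA, which the real check uses:

    // The apiserver healthz check recorded above, reduced to a bare
    // HTTPS GET. TLS verification is disabled for brevity only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
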
	I1210 23:53:53.189253    5874 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 23:53:53.208841    5874 system_pods.go:59] 18 kube-system pods found
	I1210 23:53:53.208921    5874 system_pods.go:61] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending
	I1210 23:53:53.208944    5874 system_pods.go:61] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.208987    5874 system_pods.go:61] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.209015    5874 system_pods.go:61] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.209038    5874 system_pods.go:61] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.209060    5874 system_pods.go:61] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.209093    5874 system_pods.go:61] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.209117    5874 system_pods.go:61] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending
	I1210 23:53:53.209139    5874 system_pods.go:61] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.209161    5874 system_pods.go:61] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.209195    5874 system_pods.go:61] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending
	I1210 23:53:53.209221    5874 system_pods.go:61] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.209243    5874 system_pods.go:61] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.209265    5874 system_pods.go:61] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending
	I1210 23:53:53.209298    5874 system_pods.go:61] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.209324    5874 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.209345    5874 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.209368    5874 system_pods.go:61] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending
	I1210 23:53:53.209404    5874 system_pods.go:74] duration metric: took 20.130213ms to wait for pod list to return data ...
	I1210 23:53:53.209432    5874 default_sa.go:34] waiting for default service account to be created ...
	I1210 23:53:53.213211    5874 default_sa.go:45] found service account: "default"
	I1210 23:53:53.213282    5874 default_sa.go:55] duration metric: took 3.822859ms for default service account to be created ...
	I1210 23:53:53.213305    5874 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 23:53:53.226845    5874 system_pods.go:86] 18 kube-system pods found
	I1210 23:53:53.226928    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending
	I1210 23:53:53.226948    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.226991    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.227017    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.227039    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.227062    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.227095    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.227127    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending
	I1210 23:53:53.227149    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.227173    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.227205    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending
	I1210 23:53:53.227231    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.227253    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.227277    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending
	I1210 23:53:53.227314    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.227339    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.227363    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.227386    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending
	I1210 23:53:53.227427    5874 retry.go:31] will retry after 256.634442ms: missing components: kube-dns
	I1210 23:53:53.352983    5874 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 23:53:53.353046    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:53.572938    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:53.573029    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:53.573050    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending
	I1210 23:53:53.573090    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending
	I1210 23:53:53.573119    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending
	I1210 23:53:53.573138    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.573158    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.573190    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.573215    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.573239    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:53.573262    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.573297    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.573322    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:53.573344    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending
	I1210 23:53:53.573363    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending
	I1210 23:53:53.573383    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:53.573417    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending
	I1210 23:53:53.573437    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending
	I1210 23:53:53.573459    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending
	I1210 23:53:53.573492    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:53.573532    5874 retry.go:31] will retry after 260.228046ms: missing components: kube-dns
	I1210 23:53:53.598191    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:53.605895    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:53.609782    5874 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 23:53:53.609802    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:53.807704    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:53.842016    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:53.842123    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:53.842165    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:53.842195    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:53.842223    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:53.842248    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:53.842279    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:53.842305    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:53.842325    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:53.842352    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:53.842384    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:53.842411    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:53.842435    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:53.842462    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:53.842499    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:53.842531    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:53.842558    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:53.842580    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:53.842614    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:53.842644    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:53.842676    5874 retry.go:31] will retry after 426.208397ms: missing components: kube-dns
	I1210 23:53:53.996079    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:54.100327    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:54.100548    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:54.273626    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:54.273660    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:54.273670    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:54.273678    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:54.273686    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:54.273691    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:54.273695    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:54.273699    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:54.273703    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:54.273709    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:54.273715    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:54.273719    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:54.273724    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:54.273731    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:54.273737    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:54.273744    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:54.273750    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:54.273757    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.273768    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.273774    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:54.273789    5874 retry.go:31] will retry after 419.085032ms: missing components: kube-dns
	I1210 23:53:54.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:54.495395    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:54.526660    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:54.528512    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:54.709010    5874 system_pods.go:86] 19 kube-system pods found
	I1210 23:53:54.709098    5874 system_pods.go:89] "coredns-66bc5c9577-d2djj" [02373a22-aadd-4957-b730-4307c8878d87] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 23:53:54.709123    5874 system_pods.go:89] "csi-hostpath-attacher-0" [629001a3-d7c0-4dae-b22b-b42ae05233f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 23:53:54.709166    5874 system_pods.go:89] "csi-hostpath-resizer-0" [33103ba8-a234-4346-a2a7-2cb615d7ba15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 23:53:54.709196    5874 system_pods.go:89] "csi-hostpathplugin-4lrsf" [140b0df9-4423-479e-adbd-c9afac72b649] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 23:53:54.709221    5874 system_pods.go:89] "etcd-addons-903947" [3a99ccc3-2c4d-4300-abba-d8e35e84f311] Running
	I1210 23:53:54.709245    5874 system_pods.go:89] "kindnet-mqqrh" [bf557c1b-6a17-46bd-a187-2b041b795576] Running
	I1210 23:53:54.709278    5874 system_pods.go:89] "kube-apiserver-addons-903947" [b916f600-4ff6-41a6-aac9-f6fba778d89f] Running
	I1210 23:53:54.709307    5874 system_pods.go:89] "kube-controller-manager-addons-903947" [e40a8a23-75d5-430e-81e8-3b70b414d9f2] Running
	I1210 23:53:54.709333    5874 system_pods.go:89] "kube-ingress-dns-minikube" [1b3dbf34-9147-4e38-84c2-6006b3b6f91b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 23:53:54.709351    5874 system_pods.go:89] "kube-proxy-c2rd4" [a85fdc8c-074d-4394-9730-d62027f7afd3] Running
	I1210 23:53:54.709384    5874 system_pods.go:89] "kube-scheduler-addons-903947" [d83b326f-4ee3-4e78-bfea-3daf32f0d8e6] Running
	I1210 23:53:54.709413    5874 system_pods.go:89] "metrics-server-85b7d694d7-5hpfv" [938a63a2-7347-454c-8f66-b1532ebbea30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 23:53:54.709439    5874 system_pods.go:89] "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 23:53:54.709463    5874 system_pods.go:89] "registry-6b586f9694-84lmh" [daa0b332-89ce-41d7-92b0-a3bb47e220ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 23:53:54.709499    5874 system_pods.go:89] "registry-creds-764b6fb674-jkt4x" [b8da8abc-7964-4c10-95ce-1b6e0189c8c5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 23:53:54.709530    5874 system_pods.go:89] "registry-proxy-pnxjr" [3e4d31cb-22fd-4f8a-be82-556ef4685dc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 23:53:54.709556    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2r8cx" [b44fb5e6-7cfd-4a3c-9fa7-3334caa055be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.709581    5874 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4gxqm" [825dc921-9cbd-43b5-986e-61e167e42b91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 23:53:54.709616    5874 system_pods.go:89] "storage-provisioner" [31a56daf-bf8d-403c-bad7-e13d1343648e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 23:53:54.709649    5874 system_pods.go:126] duration metric: took 1.496323695s to wait for k8s-apps to be running ...
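
The slightly irregular "will retry after 256.634442ms / 260.228046ms / 426.208397ms / 419.085032ms" intervals above are the signature of a jittered backoff (retry.go): each sleep is a base duration scaled by a random factor so concurrent waiters don't poll in lockstep. A toy version of the pattern, with illustrative tuning rather than minikube's actual values:

    // Retry with jittered backoff, the pattern behind the
    // "will retry after ..." lines above. Base/jitter are illustrative.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(check func() error, base time.Duration, maxAttempts int) error {
        for i := 0; i < maxAttempts; i++ {
            if err := check(); err == nil {
                return nil
            }
            // Sleep base scaled by a random factor in [0.5, 1.5),
            // so repeated waiters spread out instead of polling in step.
            d := time.Duration(float64(base) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v\n", d)
            time.Sleep(d)
        }
        return errors.New("timed out")
    }

    func main() {
        attempts := 0
        _ = retryUntil(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        }, 300*time.Millisecond, 10)
    }
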
	I1210 23:53:54.709673    5874 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 23:53:54.709756    5874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 23:53:54.738078    5874 system_svc.go:56] duration metric: took 28.396128ms WaitForService to wait for kubelet
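
The kubelet check is a single command over SSH, `sudo systemctl is-active --quiet service kubelet`, which exits 0 when the unit is active, so no output parsing is needed. Run locally, the same check reduces to:

    // The kubelet liveness check above, run locally instead of over SSH.
    // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
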
	I1210 23:53:54.738146    5874 kubeadm.go:587] duration metric: took 41.639794515s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 23:53:54.738181    5874 node_conditions.go:102] verifying NodePressure condition ...
	I1210 23:53:54.804466    5874 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 23:53:54.804543    5874 node_conditions.go:123] node cpu capacity is 2
	I1210 23:53:54.804572    5874 node_conditions.go:105] duration metric: took 66.367875ms to run NodePressure ...
	I1210 23:53:54.804599    5874 start.go:242] waiting for startup goroutines ...
	I1210 23:53:54.839932    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:54.999606    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:55.027546    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:55.029060    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:55.307307    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:55.493929    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:55.525566    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:55.527584    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:55.811431    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:55.997429    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:56.032989    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:56.035248    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:56.307944    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:56.495837    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:56.528020    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:56.528255    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:56.808290    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:56.994430    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:57.095442    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:57.095520    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:57.308084    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:57.495147    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:57.528023    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:57.528277    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:57.808320    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:57.994621    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:58.030773    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:58.030959    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:58.307857    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:58.495529    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:58.527813    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:58.528615    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:58.815710    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:58.994757    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:59.029639    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:59.031757    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:59.312638    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:59.496484    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:53:59.528958    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:53:59.529568    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:53:59.814132    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:53:59.994570    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:00.035223    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:00.054266    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:00.328628    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:00.495553    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:00.528947    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:00.529460    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:00.816183    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:00.996153    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:01.096596    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:01.096960    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:01.308035    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:01.494771    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:01.526361    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:01.526820    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:01.808415    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:01.994451    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:02.028096    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:02.031750    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:02.308320    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:02.495548    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:02.527830    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:02.528560    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:02.808390    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:02.995030    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:03.027604    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:03.028103    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:03.308268    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:03.495853    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:03.527790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:03.528878    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:03.808309    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:03.995302    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:04.027271    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:04.027558    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:04.307562    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:04.494506    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:04.526654    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:04.526806    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:04.809070    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:04.995298    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:05.028562    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:05.028647    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:05.308159    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:05.495448    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:05.527328    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:05.527492    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:05.808310    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:05.994296    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:06.030101    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:06.030250    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:06.308452    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:06.494759    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:06.527072    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:06.527355    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:06.808497    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:06.994609    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:07.026038    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:07.029109    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:07.308190    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:07.494381    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:07.526804    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:07.527338    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:07.807487    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:07.994144    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:08.027133    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:08.028136    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:08.307613    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:08.495088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:08.596749    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:08.596836    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:08.808529    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:08.994455    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:09.028160    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:09.028669    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:09.308791    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:09.495174    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:09.527886    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:09.528258    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:09.808842    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:09.994611    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:10.026933    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:10.027130    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:10.307963    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:10.494701    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:10.527340    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:10.527654    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:10.809980    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:10.994820    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:11.025798    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:11.027585    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:11.308214    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:11.494013    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:11.527562    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:11.528262    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:11.813738    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:11.994572    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:12.027499    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:12.027453    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:12.308366    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:12.495705    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:12.526767    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:12.526802    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:12.808035    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:12.995067    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:13.027126    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:13.027213    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:13.308332    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:13.495216    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:13.529082    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:13.529712    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:13.808805    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:13.994905    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:14.027185    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:14.028744    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:14.310875    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:14.495234    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:14.526525    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:14.527022    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:14.808041    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:14.994862    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:15.027718    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:15.030913    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:15.308505    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:15.494186    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:15.527478    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:15.527773    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:15.808785    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:15.995755    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:16.097790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:16.098001    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:16.319285    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:16.494597    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:16.525696    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:16.527192    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:16.807589    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:16.995054    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:17.027392    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:17.027786    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:17.308276    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:17.495927    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:17.527504    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:17.528580    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:17.808117    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:17.994996    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:18.036514    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:18.036860    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:18.307953    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:18.495179    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:18.527640    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:18.528680    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:18.808257    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:18.995506    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:19.028193    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:19.028740    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:19.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:19.495439    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:19.529260    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:19.529658    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:19.807939    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:19.994378    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:20.028143    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:20.028321    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:20.312882    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:20.494207    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:20.526458    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:20.526861    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:20.807796    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:20.995112    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:21.026534    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:21.027331    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:21.309109    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:21.495088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:21.526289    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:21.526883    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:21.807751    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:21.994895    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:22.096985    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:22.097530    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:22.309040    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:22.494922    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:22.528646    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:22.529144    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:22.811863    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:22.995440    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:23.027256    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:23.028088    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:23.308019    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:23.496022    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:23.526932    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:23.527146    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 23:54:23.808638    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:23.994934    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:24.027611    5874 kapi.go:107] duration metric: took 1m5.005227236s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 23:54:24.028004    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:24.308541    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:24.494906    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:24.527371    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:24.808747    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:24.995279    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:25.027882    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:25.307799    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:25.495017    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:25.526690    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:25.807736    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:25.994855    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:26.096241    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:26.307987    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:26.495614    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:26.527176    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:26.808541    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:26.994649    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:27.026648    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:27.308254    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:27.494942    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:27.527509    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:27.808004    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:27.995294    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:28.027751    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:28.307951    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:28.495338    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:28.527967    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:28.808211    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:28.995642    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:29.026746    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:29.307359    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:29.495260    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:29.526993    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:29.807356    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:29.995077    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:30.032070    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:30.308817    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:30.495082    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:30.526229    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:30.807070    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:30.995452    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:31.028257    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:31.308214    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:31.494863    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:31.526592    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:31.807722    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:31.994311    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:32.027517    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:32.310794    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:32.496513    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:32.599856    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:32.808003    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:32.997603    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:33.032585    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:33.309437    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:33.495618    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:33.531023    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:33.808423    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:33.994651    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:34.027620    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:34.307964    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:34.495106    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:34.526697    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:34.808578    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:34.995268    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:35.028026    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:35.307790    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:35.494636    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:35.526853    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:35.808255    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:35.995339    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:36.029852    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:36.307270    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:36.494479    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:36.526902    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:36.808523    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:36.994189    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:37.027686    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:37.308726    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:37.495071    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:37.526751    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:37.808732    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:37.994440    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:38.027182    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:38.307962    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:38.495420    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:38.527002    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:38.811652    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.014612    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:39.030241    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:39.309417    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.494054    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:39.526780    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:39.808721    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:39.996928    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:40.030002    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:40.311877    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:40.497801    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:40.526477    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:40.807259    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:40.994585    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:41.027340    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:41.307314    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:41.494342    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:41.526914    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:41.809439    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:42.013819    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 23:54:42.052035    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:42.309610    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:42.495003    5874 kapi.go:107] duration metric: took 1m20.003567592s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 23:54:42.498571    5874 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-903947 cluster.
	I1210 23:54:42.501693    5874 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 23:54:42.505255    5874 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
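In practice the skip label goes on the pod's own metadata: minikube's gcp-auth addon documentation shows gcp-auth-skip-secret: "true" under metadata.labels, as the addon's mutating webhook keys off this label when deciding whether to mount the credentials.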
	I1210 23:54:42.526874    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:42.808628    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:43.031803    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:43.308516    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:43.527700    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:43.811525    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:44.029735    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:44.308297    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:44.527396    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:44.807673    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:45.033640    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:45.309942    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:45.527426    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:45.808296    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:46.027183    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:46.308137    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:46.528468    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:46.812201    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:47.029158    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:47.308612    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:47.527442    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:47.807698    5874 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 23:54:48.030173    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:48.308202    5874 kapi.go:107] duration metric: took 1m29.004121483s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 23:54:48.527023    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:49.026774    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:49.528563    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:50.027771    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:50.527293    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:51.028743    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:51.527201    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:52.027653    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:52.527097    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:53.026603    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:53.526948    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:54.026346    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:54.527054    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:55.026688    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:55.527197    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:56.026287    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:56.526500    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:57.027436    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:57.527314    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:58.027552    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:58.527378    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:59.026523    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:54:59.526626    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:00.048240    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:00.527394    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:01.027173    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:01.526852    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:02.028074    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:02.529310    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:03.027666    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:03.527323    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:04.027244    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:04.531065    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:05.027607    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:05.528089    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:06.027643    5874 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 23:55:06.527641    5874 kapi.go:107] duration metric: took 1m47.50426509s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 23:55:06.530868    5874 out.go:179] * Enabled addons: nvidia-device-plugin, inspektor-gadget, storage-provisioner, cloud-spanner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1210 23:55:06.533824    5874 addons.go:530] duration metric: took 1m53.435187373s for enable addons: enabled=[nvidia-device-plugin inspektor-gadget storage-provisioner cloud-spanner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1210 23:55:06.533888    5874 start.go:247] waiting for cluster config update ...
	I1210 23:55:06.533915    5874 start.go:256] writing updated cluster config ...
	I1210 23:55:06.534209    5874 ssh_runner.go:195] Run: rm -f paused
	I1210 23:55:06.540119    5874 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:55:06.627543    5874 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d2djj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.633582    5874 pod_ready.go:94] pod "coredns-66bc5c9577-d2djj" is "Ready"
	I1210 23:55:06.633611    5874 pod_ready.go:86] duration metric: took 6.039113ms for pod "coredns-66bc5c9577-d2djj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.635958    5874 pod_ready.go:83] waiting for pod "etcd-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.640656    5874 pod_ready.go:94] pod "etcd-addons-903947" is "Ready"
	I1210 23:55:06.640688    5874 pod_ready.go:86] duration metric: took 4.707486ms for pod "etcd-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.642998    5874 pod_ready.go:83] waiting for pod "kube-apiserver-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.650127    5874 pod_ready.go:94] pod "kube-apiserver-addons-903947" is "Ready"
	I1210 23:55:06.650155    5874 pod_ready.go:86] duration metric: took 7.133548ms for pod "kube-apiserver-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.652791    5874 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:06.943811    5874 pod_ready.go:94] pod "kube-controller-manager-addons-903947" is "Ready"
	I1210 23:55:06.943840    5874 pod_ready.go:86] duration metric: took 291.01994ms for pod "kube-controller-manager-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.144622    5874 pod_ready.go:83] waiting for pod "kube-proxy-c2rd4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.544140    5874 pod_ready.go:94] pod "kube-proxy-c2rd4" is "Ready"
	I1210 23:55:07.544171    5874 pod_ready.go:86] duration metric: took 399.515197ms for pod "kube-proxy-c2rd4" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:07.744697    5874 pod_ready.go:83] waiting for pod "kube-scheduler-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:08.144244    5874 pod_ready.go:94] pod "kube-scheduler-addons-903947" is "Ready"
	I1210 23:55:08.144276    5874 pod_ready.go:86] duration metric: took 399.544373ms for pod "kube-scheduler-addons-903947" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 23:55:08.144290    5874 pod_ready.go:40] duration metric: took 1.60414067s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 23:55:08.529010    5874 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 23:55:08.535663    5874 out.go:179] * Done! kubectl is now configured to use "addons-903947" cluster and "default" namespace by default
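The pod_ready.go lines above poll each kube-system pod's PodReady condition on a roughly 500ms cadence until it reports True or the 4m0s budget runs out. A minimal client-go sketch of that pattern, assuming an out-of-cluster kubeconfig; waitForReady and its parameters are illustrative names, not minikube's actual helpers:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForReady lists pods matching selector in ns and returns once every
// matching pod reports the PodReady condition as True, or errors on timeout.
func waitForReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			allReady = allReady && ready
		}
		if allReady {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %q pods in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForReady(cs, "kube-system", "k8s-app=kube-dns", 4*time.Minute))
}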
	
	
	==> CRI-O <==
	Dec 10 23:55:07 addons-903947 crio[835]: time="2025-12-10T23:55:07.491239636Z" level=info msg="Stopped pod sandbox (already stopped): efc20ef570d4bbe5203bac197a272a13a5a052cdea142551c0636573c616b0f2" id=cdb598c0-d7c4-4bbe-a558-99605cf91f55 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 10 23:55:07 addons-903947 crio[835]: time="2025-12-10T23:55:07.491591344Z" level=info msg="Removing pod sandbox: efc20ef570d4bbe5203bac197a272a13a5a052cdea142551c0636573c616b0f2" id=f3396ce0-edbe-4ef5-b345-89f4758acbd4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 10 23:55:07 addons-903947 crio[835]: time="2025-12-10T23:55:07.496502308Z" level=info msg="Removed pod sandbox: efc20ef570d4bbe5203bac197a272a13a5a052cdea142551c0636573c616b0f2" id=f3396ce0-edbe-4ef5-b345-89f4758acbd4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.85930755Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b8fdae2f-8c94-4ad6-b2b3-41025b353615 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.859385009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.867258727Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:071f723352513dbe1bd86231542cdeedad1d3a30433b4d8d0522cbff4eb9329a UID:be200ed0-5d73-4ca5-a017-389e615081d5 NetNS:/var/run/netns/675fbc5d-123b-4031-b028-c5250492b454 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.867301059Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.88264973Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:071f723352513dbe1bd86231542cdeedad1d3a30433b4d8d0522cbff4eb9329a UID:be200ed0-5d73-4ca5-a017-389e615081d5 NetNS:/var/run/netns/675fbc5d-123b-4031-b028-c5250492b454 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012db98}] Aliases:map[]}"
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.882800906Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.886740518Z" level=info msg="Ran pod sandbox 071f723352513dbe1bd86231542cdeedad1d3a30433b4d8d0522cbff4eb9329a with infra container: default/busybox/POD" id=b8fdae2f-8c94-4ad6-b2b3-41025b353615 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.88948929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5cb84194-ecc3-4d97-84b5-1605cf96b55a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.889760929Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5cb84194-ecc3-4d97-84b5-1605cf96b55a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.889886759Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5cb84194-ecc3-4d97-84b5-1605cf96b55a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.892313929Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fa279997-90d5-4807-a749-fcff1cccfc15 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:55:09 addons-903947 crio[835]: time="2025-12-10T23:55:09.902796928Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.899171818Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=fa279997-90d5-4807-a749-fcff1cccfc15 name=/runtime.v1.ImageService/PullImage
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.899855483Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a46a18c1-aecf-47ab-81be-740438e6a0e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.90280231Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3eafbede-8ab5-42db-8efa-ee24bc5d39a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.909754278Z" level=info msg="Creating container: default/busybox/busybox" id=ed1fbf0a-5083-4bdd-9373-4873fb3522ef name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.910035854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.91949214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.920236993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.937680422Z" level=info msg="Created container f5cf7c2c09d285776a16e8735b5a34fc508e3ec127636ff72990c11421360f66: default/busybox/busybox" id=ed1fbf0a-5083-4bdd-9373-4873fb3522ef name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.938341752Z" level=info msg="Starting container: f5cf7c2c09d285776a16e8735b5a34fc508e3ec127636ff72990c11421360f66" id=8ef7947e-66af-4738-8ba3-c8224717c590 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 23:55:11 addons-903947 crio[835]: time="2025-12-10T23:55:11.940008431Z" level=info msg="Started container" PID=4990 containerID=f5cf7c2c09d285776a16e8735b5a34fc508e3ec127636ff72990c11421360f66 description=default/busybox/busybox id=8ef7947e-66af-4738-8ba3-c8224717c590 name=/runtime.v1.RuntimeService/StartContainer sandboxID=071f723352513dbe1bd86231542cdeedad1d3a30433b4d8d0522cbff4eb9329a
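The CRI-O entries above show the CRI call sequence for the busybox pod: RunPodSandbox, an ImageStatus that finds nothing, PullImage, then CreateContainer and StartContainer. A sketch of the two image calls over CRI-O's gRPC socket, assuming k8s.io/cri-api and grpc-go; the socket path is CRI-O's default, and the control flow is illustrative rather than CRI-O's or kubelet's actual code:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path on the node.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// Mirrors the "Checking image status" log line.
	st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// Mirrors "Image ... not found" followed by "Pulling image".
		res, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled:", res.ImageRef) // digest, as in the "Pulled image" line
	}
}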
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f5cf7c2c09d28       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   071f723352513       busybox                                     default
	151c5e31ce828       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             15 seconds ago       Running             controller                               0                   16bf235c27ccc       ingress-nginx-controller-85d4c799dd-wgvkv   ingress-nginx
	943aa1912d4eb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          33 seconds ago       Running             csi-snapshotter                          0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	3b5f3211aef97       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          35 seconds ago       Running             csi-provisioner                          0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	994a8f897438c       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            36 seconds ago       Running             liveness-probe                           0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	5aabb9a953d20       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           37 seconds ago       Running             hostpath                                 0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	e2cab701ed360       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 39 seconds ago       Running             gcp-auth                                 0                   a58185ed1a522       gcp-auth-78565c9fb4-fnrhq                   gcp-auth
	b2837952c0dbd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            42 seconds ago       Running             gadget                                   0                   b8b49d007d61e       gadget-fqvh4                                gadget
	c4e4cea51bd36       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                45 seconds ago       Running             node-driver-registrar                    0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	d2f52d911b3ce       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             46 seconds ago       Running             local-path-provisioner                   0                   d65b4f139b68a       local-path-provisioner-648f6765c9-tcz2p     local-path-storage
	ab85dca695b3d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   47 seconds ago       Exited              patch                                    0                   3d9576a622130       ingress-nginx-admission-patch-7tj25         ingress-nginx
	c0a8cc7c2669f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   48 seconds ago       Exited              create                                   0                   8d2ea723921b0       ingress-nginx-admission-create-7klqt        ingress-nginx
	8cb1a16ef86ba       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               48 seconds ago       Running             minikube-ingress-dns                     0                   47087c6174d0f       kube-ingress-dns-minikube                   kube-system
	a629933611fce       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              57 seconds ago       Running             registry-proxy                           0                   0629355522fed       registry-proxy-pnxjr                        kube-system
	025942b6fe499       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   e763fb924970e       nvidia-device-plugin-daemonset-mpzgr        kube-system
	f5c570a6481f2       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   9b829e44e9068       registry-6b586f9694-84lmh                   kube-system
	56a6cc123f59d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   31ef87f8e9406       csi-hostpathplugin-4lrsf                    kube-system
	9b51fb4b4cd2a       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   23e51ff8331a7       csi-hostpath-attacher-0                     kube-system
	d16bf5857a0b5       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6942e6be91b9f       snapshot-controller-7d9fbc56b8-4gxqm        kube-system
	740d4f99a5749       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   c7e8c0da85f1a       yakd-dashboard-5ff678cb9-t5k9j              yakd-dashboard
	be88179f8ab31       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   586c54945de3c       snapshot-controller-7d9fbc56b8-2r8cx        kube-system
	d84d9d1bef357       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   a7235df2c4b96       csi-hostpath-resizer-0                      kube-system
	15faddfa8e68f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago   Running             cloud-spanner-emulator                   0                   55d0b5df7b414       cloud-spanner-emulator-5bdddb765-brsbh      default
	245f22fe409d8       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   ea073d4f2cdb2       metrics-server-85b7d694d7-5hpfv             kube-system
	976bb3f5e7f34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   ec0db5b7d9cbe       storage-provisioner                         kube-system
	09d359052fb27       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   9c15f169ffc33       coredns-66bc5c9577-d2djj                    kube-system
	98be1397391f4       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago        Running             kube-proxy                               0                   aff6b7fc5fc3e       kube-proxy-c2rd4                            kube-system
	49bf5d15dca73       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   0ec0eae9188af       kindnet-mqqrh                               kube-system
	97d59cbad9439       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   acd5fac1b2212       kube-scheduler-addons-903947                kube-system
	ff91a260c6d64       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   717ba32271758       kube-apiserver-addons-903947                kube-system
	b2794babaa59b       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   7c71e2cb1fe75       etcd-addons-903947                          kube-system
	479052386d5c3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   f2b6d1dea9eed       kube-controller-manager-addons-903947       kube-system
	
	
	==> coredns [09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55] <==
	[INFO] 10.244.0.15:50876 - 431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073348s
	[INFO] 10.244.0.15:50876 - 64826 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002049003s
	[INFO] 10.244.0.15:50876 - 59005 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00292025s
	[INFO] 10.244.0.15:50876 - 12029 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000190141s
	[INFO] 10.244.0.15:50876 - 39032 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000318349s
	[INFO] 10.244.0.15:60459 - 17450 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000147006s
	[INFO] 10.244.0.15:60459 - 17253 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169472s
	[INFO] 10.244.0.15:46137 - 10544 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116105s
	[INFO] 10.244.0.15:46137 - 10355 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011878s
	[INFO] 10.244.0.15:46589 - 62145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115933s
	[INFO] 10.244.0.15:46589 - 61701 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087715s
	[INFO] 10.244.0.15:37135 - 42406 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001741558s
	[INFO] 10.244.0.15:37135 - 42595 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001844626s
	[INFO] 10.244.0.15:46508 - 32453 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012084s
	[INFO] 10.244.0.15:46508 - 32041 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000199208s
	[INFO] 10.244.0.20:49219 - 58195 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000226951s
	[INFO] 10.244.0.20:60921 - 4921 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000143988s
	[INFO] 10.244.0.20:33808 - 8961 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000256187s
	[INFO] 10.244.0.20:44811 - 39714 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000331889s
	[INFO] 10.244.0.20:33189 - 20515 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000220994s
	[INFO] 10.244.0.20:39497 - 30494 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000219419s
	[INFO] 10.244.0.20:36426 - 4120 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002246943s
	[INFO] 10.244.0.20:48882 - 36512 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00248123s
	[INFO] 10.244.0.20:42955 - 9144 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001977061s
	[INFO] 10.244.0.20:55999 - 48585 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002562061s
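The NXDOMAIN cascade above is ordinary ndots search-path expansion, not a failure: with options ndots:5, a name like storage.googleapis.com is first tried against every search suffix (pod namespace, svc, cluster, then the EC2 host domain visible here) before the bare name finally resolves NOERROR. Reconstructed from the suffixes in the log, the resolv.conf of the querying gcp-auth pod would look roughly like this; the nameserver address is the conventional kube-dns ClusterIP, assumed rather than shown in the log:

nameserver 10.96.0.10
search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
options ndots:5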
	
	
	==> describe nodes <==
	Name:               addons-903947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-903947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=addons-903947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T23_53_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-903947
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-903947"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 23:53:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-903947
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 23:55:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 23:55:09 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 23:55:09 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 23:55:09 +0000   Wed, 10 Dec 2025 23:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 23:55:09 +0000   Wed, 10 Dec 2025 23:53:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-903947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                c7969298-bf03-4ca2-bd93-d9f79dc1e090
	  Boot ID:                    0edab61d-52b1-4525-85dd-848bc0b1d36e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-brsbh       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  gadget                      gadget-fqvh4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  gcp-auth                    gcp-auth-78565c9fb4-fnrhq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-wgvkv    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m3s
	  kube-system                 coredns-66bc5c9577-d2djj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpathplugin-4lrsf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 etcd-addons-903947                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-mqqrh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-addons-903947                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-addons-903947        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-c2rd4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-addons-903947                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 metrics-server-85b7d694d7-5hpfv              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m3s
	  kube-system                 nvidia-device-plugin-daemonset-mpzgr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 registry-6b586f9694-84lmh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-creds-764b6fb674-jkt4x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-proxy-pnxjr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-2r8cx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 snapshot-controller-7d9fbc56b8-4gxqm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  local-path-storage          local-path-provisioner-648f6765c9-tcz2p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-t5k9j               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m8s   kube-proxy       
	  Normal   Starting                 2m14s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m14s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m14s  kubelet          Node addons-903947 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s  kubelet          Node addons-903947 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s  kubelet          Node addons-903947 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m10s  node-controller  Node addons-903947 event: Registered Node addons-903947 in Controller
	  Normal   NodeReady                88s    kubelet          Node addons-903947 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259] <==
	{"level":"warn","ts":"2025-12-10T23:53:03.180004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.192759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.248009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.284145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.296838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.339835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.374857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.390064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.431448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.460397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.499034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.534465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.556335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.610659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.634962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.694273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.720965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.761556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:03.926856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:19.511496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:19.530605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.702043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.717571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.745627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T23:53:41.765901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46420","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [e2cab701ed36062191e22a5d185507b2e4d51a30d9853eb7ceb9b08d9663ccf9] <==
	2025/12/10 23:54:41 GCP Auth Webhook started!
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	2025/12/10 23:55:09 Ready to marshal response ...
	2025/12/10 23:55:09 Ready to write response ...
	
	
	==> kernel <==
	 23:55:21 up 6 min,  0 user,  load average: 2.33, 1.20, 0.52
	Linux addons-903947 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0] <==
	E1210 23:53:42.637734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 23:53:42.638918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 23:53:42.639046       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1210 23:53:43.937754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 23:53:43.937845       1 metrics.go:72] Registering metrics
	I1210 23:53:43.937915       1 controller.go:711] "Syncing nftables rules"
	E1210 23:53:43.938016       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1210 23:53:52.636257       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:53:52.636317       1 main.go:301] handling current node
	I1210 23:54:02.631534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:02.631583       1 main.go:301] handling current node
	I1210 23:54:12.631543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:12.631571       1 main.go:301] handling current node
	I1210 23:54:22.631488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:22.631521       1 main.go:301] handling current node
	I1210 23:54:32.639315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:32.639351       1 main.go:301] handling current node
	I1210 23:54:42.632090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:42.632158       1 main.go:301] handling current node
	I1210 23:54:52.632147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:54:52.632231       1 main.go:301] handling current node
	I1210 23:55:02.633328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:55:02.633393       1 main.go:301] handling current node
	I1210 23:55:12.631895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 23:55:12.631932       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c] <==
	W1210 23:53:19.510611       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 23:53:19.529250       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1210 23:53:22.346779       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.9.76"}
	W1210 23:53:41.701919       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:41.717563       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:41.744941       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:41.765434       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 23:53:53.132908       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.132960       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:53:53.143258       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.143354       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:53:53.211175       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.9.76:443: connect: connection refused
	E1210 23:53:53.216794       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.9.76:443: connect: connection refused" logger="UnhandledError"
	W1210 23:54:00.811532       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 23:54:00.811608       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 23:54:00.812799       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.814271       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.819528       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	E1210 23:54:00.840848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.246.193:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.246.193:443: connect: connection refused" logger="UnhandledError"
	I1210 23:54:00.997991       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 23:55:19.077905       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38546: use of closed network connection
	E1210 23:55:19.216172       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38572: use of closed network connection
	
	
	==> kube-controller-manager [479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0] <==
	I1210 23:53:11.714726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1210 23:53:11.714537       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1210 23:53:11.714875       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:11.714895       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 23:53:11.714901       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 23:53:11.716095       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1210 23:53:11.717482       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:11.717571       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 23:53:11.720660       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 23:53:11.720749       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 23:53:11.720748       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 23:53:11.721277       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 23:53:11.726028       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 23:53:11.731388       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 23:53:11.735651       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 23:53:11.738253       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1210 23:53:17.983496       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 23:53:41.694772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 23:53:41.694946       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 23:53:41.695016       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 23:53:41.731967       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 23:53:41.736496       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 23:53:41.795310       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 23:53:41.837199       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 23:53:56.707326       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a] <==
	I1210 23:53:12.294755       1 server_linux.go:53] "Using iptables proxy"
	I1210 23:53:12.383436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 23:53:12.491405       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 23:53:12.491438       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 23:53:12.491536       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 23:53:12.634235       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 23:53:12.634411       1 server_linux.go:132] "Using iptables Proxier"
	I1210 23:53:12.643637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 23:53:12.653659       1 server.go:527] "Version info" version="v1.34.2"
	I1210 23:53:12.653689       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 23:53:12.657748       1 config.go:200] "Starting service config controller"
	I1210 23:53:12.657791       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 23:53:12.675308       1 config.go:106] "Starting endpoint slice config controller"
	I1210 23:53:12.675329       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 23:53:12.675375       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 23:53:12.675380       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 23:53:12.676154       1 config.go:309] "Starting node config controller"
	I1210 23:53:12.676163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 23:53:12.676175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 23:53:12.758338       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 23:53:12.776252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 23:53:12.776252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c] <==
	E1210 23:53:04.779541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 23:53:04.779597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 23:53:04.779646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:53:04.779694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:53:04.779767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 23:53:04.779837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:53:04.779884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:53:04.779928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:53:04.779966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 23:53:04.780011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 23:53:04.780056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 23:53:04.780103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 23:53:04.780146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 23:53:04.780194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 23:53:04.780237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 23:53:04.780337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 23:53:04.780394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 23:53:05.605056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 23:53:05.614499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 23:53:05.742024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 23:53:05.772549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 23:53:05.784791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 23:53:05.831363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 23:53:06.083192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 23:53:09.043719       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
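The burst of "Failed to watch ... is forbidden" errors above is the usual scheduler startup race: the scheduler's informers begin listing before the bootstrap RBAC ClusterRoleBindings have been written, every list is denied, and the reflectors retry until the permissions land; the cache-sync line at 23:53:09 shows the recovery. Had the errors persisted, the grants could be probed directly (a sketch reusing this report's context name):

	kubectl --context addons-903947 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-903947 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler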
	
	
	==> kubelet <==
	Dec 10 23:54:34 addons-903947 kubelet[1298]: I1210 23:54:34.230790    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dr52s\" (UniqueName: \"kubernetes.io/projected/5d8d4928-6e66-405c-b641-023bd3d3c12c-kube-api-access-dr52s\") on node \"addons-903947\" DevicePath \"\""
	Dec 10 23:54:34 addons-903947 kubelet[1298]: I1210 23:54:34.938062    1298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2ea723921b056aa8763ce62040ec58deaacef5636271bedd3de29919fbbede"
	Dec 10 23:54:35 addons-903947 kubelet[1298]: I1210 23:54:35.089407    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="local-path-storage/local-path-provisioner-648f6765c9-tcz2p" podStartSLOduration=37.920607914 podStartE2EDuration="1m17.089386864s" podCreationTimestamp="2025-12-10 23:53:18 +0000 UTC" firstStartedPulling="2025-12-10 23:53:54.950126425 +0000 UTC m=+47.651670360" lastFinishedPulling="2025-12-10 23:54:34.118905383 +0000 UTC m=+86.820449310" observedRunningTime="2025-12-10 23:54:34.974897374 +0000 UTC m=+87.676441309" watchObservedRunningTime="2025-12-10 23:54:35.089386864 +0000 UTC m=+87.790930807"
	Dec 10 23:54:35 addons-903947 kubelet[1298]: I1210 23:54:35.338031    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qk7x\" (UniqueName: \"kubernetes.io/projected/26ef1912-59cd-4ace-bed0-25b8a9ef720c-kube-api-access-4qk7x\") pod \"26ef1912-59cd-4ace-bed0-25b8a9ef720c\" (UID: \"26ef1912-59cd-4ace-bed0-25b8a9ef720c\") "
	Dec 10 23:54:35 addons-903947 kubelet[1298]: I1210 23:54:35.341067    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ef1912-59cd-4ace-bed0-25b8a9ef720c-kube-api-access-4qk7x" (OuterVolumeSpecName: "kube-api-access-4qk7x") pod "26ef1912-59cd-4ace-bed0-25b8a9ef720c" (UID: "26ef1912-59cd-4ace-bed0-25b8a9ef720c"). InnerVolumeSpecName "kube-api-access-4qk7x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 23:54:35 addons-903947 kubelet[1298]: I1210 23:54:35.440291    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qk7x\" (UniqueName: \"kubernetes.io/projected/26ef1912-59cd-4ace-bed0-25b8a9ef720c-kube-api-access-4qk7x\") on node \"addons-903947\" DevicePath \"\""
	Dec 10 23:54:35 addons-903947 kubelet[1298]: I1210 23:54:35.970587    1298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d9576a622130c02146a023575f06424da91a1e8c8190dd76fc2648f2b86f47f"
	Dec 10 23:54:41 addons-903947 kubelet[1298]: I1210 23:54:41.860787    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-fqvh4" podStartSLOduration=67.866302291 podStartE2EDuration="1m24.860767338s" podCreationTimestamp="2025-12-10 23:53:17 +0000 UTC" firstStartedPulling="2025-12-10 23:54:21.72128089 +0000 UTC m=+74.422824817" lastFinishedPulling="2025-12-10 23:54:38.715745855 +0000 UTC m=+91.417289864" observedRunningTime="2025-12-10 23:54:39.023583186 +0000 UTC m=+91.725127137" watchObservedRunningTime="2025-12-10 23:54:41.860767338 +0000 UTC m=+94.562311265"
	Dec 10 23:54:42 addons-903947 kubelet[1298]: I1210 23:54:42.021651    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-fnrhq" podStartSLOduration=63.757911968 podStartE2EDuration="1m20.0216281s" podCreationTimestamp="2025-12-10 23:53:22 +0000 UTC" firstStartedPulling="2025-12-10 23:54:25.437576511 +0000 UTC m=+78.139120438" lastFinishedPulling="2025-12-10 23:54:41.701292644 +0000 UTC m=+94.402836570" observedRunningTime="2025-12-10 23:54:42.020319273 +0000 UTC m=+94.721863225" watchObservedRunningTime="2025-12-10 23:54:42.0216281 +0000 UTC m=+94.723172026"
	Dec 10 23:54:43 addons-903947 kubelet[1298]: I1210 23:54:43.445766    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393a752b-e364-4e71-9c31-24907c41f9f0" path="/var/lib/kubelet/pods/393a752b-e364-4e71-9c31-24907c41f9f0/volumes"
	Dec 10 23:54:43 addons-903947 kubelet[1298]: I1210 23:54:43.643319    1298 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 10 23:54:43 addons-903947 kubelet[1298]: I1210 23:54:43.644345    1298 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 10 23:54:48 addons-903947 kubelet[1298]: I1210 23:54:48.097714    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-4lrsf" podStartSLOduration=1.828512782 podStartE2EDuration="55.097695031s" podCreationTimestamp="2025-12-10 23:53:53 +0000 UTC" firstStartedPulling="2025-12-10 23:53:54.012437786 +0000 UTC m=+46.713981713" lastFinishedPulling="2025-12-10 23:54:47.281620035 +0000 UTC m=+99.983163962" observedRunningTime="2025-12-10 23:54:48.079071557 +0000 UTC m=+100.780615525" watchObservedRunningTime="2025-12-10 23:54:48.097695031 +0000 UTC m=+100.799238966"
	Dec 10 23:54:49 addons-903947 kubelet[1298]: I1210 23:54:49.445943    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f95b09c-c7f4-4e2f-a782-d635a797fece" path="/var/lib/kubelet/pods/1f95b09c-c7f4-4e2f-a782-d635a797fece/volumes"
	Dec 10 23:54:57 addons-903947 kubelet[1298]: E1210 23:54:57.942322    1298 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 10 23:54:57 addons-903947 kubelet[1298]: E1210 23:54:57.942418    1298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8da8abc-7964-4c10-95ce-1b6e0189c8c5-gcr-creds podName:b8da8abc-7964-4c10-95ce-1b6e0189c8c5 nodeName:}" failed. No retries permitted until 2025-12-10 23:56:01.942400007 +0000 UTC m=+174.643943942 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b8da8abc-7964-4c10-95ce-1b6e0189c8c5-gcr-creds") pod "registry-creds-764b6fb674-jkt4x" (UID: "b8da8abc-7964-4c10-95ce-1b6e0189c8c5") : secret "registry-creds-gcr" not found
	Dec 10 23:55:06 addons-903947 kubelet[1298]: I1210 23:55:06.148521    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-wgvkv" podStartSLOduration=101.520990961 podStartE2EDuration="1m48.148503877s" podCreationTimestamp="2025-12-10 23:53:18 +0000 UTC" firstStartedPulling="2025-12-10 23:54:58.582382374 +0000 UTC m=+111.283926301" lastFinishedPulling="2025-12-10 23:55:05.209895282 +0000 UTC m=+117.911439217" observedRunningTime="2025-12-10 23:55:06.145328672 +0000 UTC m=+118.846872607" watchObservedRunningTime="2025-12-10 23:55:06.148503877 +0000 UTC m=+118.850047812"
	Dec 10 23:55:07 addons-903947 kubelet[1298]: I1210 23:55:07.462950    1298 scope.go:117] "RemoveContainer" containerID="203d621b1caed126c536e36b9fd7877943dcb0ce2750db28f22a9bb6b8cfe5a6"
	Dec 10 23:55:07 addons-903947 kubelet[1298]: I1210 23:55:07.473283    1298 scope.go:117] "RemoveContainer" containerID="acddff703b04d9ade68caf779bb07d52448f67dd3e6c5a0f97e76db0759d3505"
	Dec 10 23:55:07 addons-903947 kubelet[1298]: E1210 23:55:07.593041    1298 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/21e956291d68c9c5131af55d96d7b9a3bb787f66732d111066240e5b09b553dc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/21e956291d68c9c5131af55d96d7b9a3bb787f66732d111066240e5b09b553dc/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-nqpgw_1f95b09c-c7f4-4e2f-a782-d635a797fece/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-nqpgw_1f95b09c-c7f4-4e2f-a782-d635a797fece/patch/1.log: no such file or directory
	Dec 10 23:55:07 addons-903947 kubelet[1298]: E1210 23:55:07.595329    1298 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ae7f08f3bcc272f963f6cade97664c66d5b9dfa42ea83ea9adad67da6e615cba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ae7f08f3bcc272f963f6cade97664c66d5b9dfa42ea83ea9adad67da6e615cba/diff: no such file or directory, extraDiskErr: <nil>
	Dec 10 23:55:09 addons-903947 kubelet[1298]: I1210 23:55:09.665337    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2pxk\" (UniqueName: \"kubernetes.io/projected/be200ed0-5d73-4ca5-a017-389e615081d5-kube-api-access-l2pxk\") pod \"busybox\" (UID: \"be200ed0-5d73-4ca5-a017-389e615081d5\") " pod="default/busybox"
	Dec 10 23:55:09 addons-903947 kubelet[1298]: I1210 23:55:09.665429    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/be200ed0-5d73-4ca5-a017-389e615081d5-gcp-creds\") pod \"busybox\" (UID: \"be200ed0-5d73-4ca5-a017-389e615081d5\") " pod="default/busybox"
	Dec 10 23:55:18 addons-903947 kubelet[1298]: I1210 23:55:18.156721    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=7.146197143 podStartE2EDuration="9.156599358s" podCreationTimestamp="2025-12-10 23:55:09 +0000 UTC" firstStartedPulling="2025-12-10 23:55:09.890316713 +0000 UTC m=+122.591860640" lastFinishedPulling="2025-12-10 23:55:11.90071892 +0000 UTC m=+124.602262855" observedRunningTime="2025-12-10 23:55:12.17316172 +0000 UTC m=+124.874705655" watchObservedRunningTime="2025-12-10 23:55:18.156599358 +0000 UTC m=+130.858143284"
	Dec 10 23:55:21 addons-903947 kubelet[1298]: I1210 23:55:21.443701    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-84lmh" secret="" err="secret \"gcp-auth\" not found"
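The registry-creds-gcr mount failure at 23:54:57 in this kubelet section means the registry-creds addon was enabled without its credentials secret; the kubelet backs off for 1m4s before retrying the mount, which matches registry-creds-764b6fb674-jkt4x appearing among the non-running pods later in this report. The secret is normally supplied through the addon's interactive configure step; a check-and-fix sketch:

	# Confirm the secret the kubelet is looking for really is absent:
	kubectl --context addons-903947 -n kube-system get secret registry-creds-gcr
	# registry-creds creates its secrets from values prompted for here:
	out/minikube-linux-arm64 -p addons-903947 addons configure registry-creds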
	
	
	==> storage-provisioner [976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0] <==
	W1210 23:54:56.483574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:54:58.486746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:54:58.491555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:00.494927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:00.506061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:02.509423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:02.515297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:04.519803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:04.530402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:06.533312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:06.543296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:08.558158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:08.570611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:10.574155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:10.578502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:12.581942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:12.586715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:14.590348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:14.594912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:16.598653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:16.605567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:18.609053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:18.613756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:20.617047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 23:55:20.624647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
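The wall of W-level lines here is one message repeated every two seconds: the storage-provisioner still reads and writes v1 Endpoints (it uses them for leader election), which the API server now flags as deprecated in favor of discovery.k8s.io/v1 EndpointSlice. The warnings are harmless for this run; the replacement objects can be listed alongside the old ones to confirm nothing is actually broken:

	kubectl --context addons-903947 -n kube-system get endpoints,endpointslices
	kubectl --context addons-903947 -n kube-system get leases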
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-903947 -n addons-903947
helpers_test.go:270: (dbg) Run:  kubectl --context addons-903947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25 registry-creds-764b6fb674-jkt4x
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25 registry-creds-764b6fb674-jkt4x
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25 registry-creds-764b6fb674-jkt4x: exit status 1 (85.277468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7klqt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7tj25" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-jkt4x" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-903947 describe pod ingress-nginx-admission-create-7klqt ingress-nginx-admission-patch-7tj25 registry-creds-764b6fb674-jkt4x: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable headlamp --alsologtostderr -v=1: exit status 11 (249.650579ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:55:22.416190   12504 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:22.416439   12504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:22.416474   12504 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:22.416498   12504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:22.416778   12504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:22.417085   12504 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:22.417511   12504 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:22.417559   12504 addons.go:622] checking whether the cluster is paused
	I1210 23:55:22.417696   12504 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:22.417732   12504 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:22.418256   12504 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:22.435639   12504 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:22.435750   12504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:22.458829   12504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:22.561669   12504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:22.561809   12504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:22.591263   12504 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:22.591285   12504 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:22.591298   12504 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:22.591306   12504 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:22.591310   12504 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:22.591314   12504 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:22.591317   12504 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:22.591320   12504 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:22.591343   12504 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:22.591356   12504 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:22.591360   12504 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:22.591363   12504 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:22.591366   12504 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:22.591369   12504 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:22.591372   12504 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:22.591378   12504 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:22.591385   12504 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:22.591389   12504 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:22.591392   12504 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:22.591395   12504 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:22.591400   12504 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:22.591403   12504 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:22.591419   12504 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:22.591430   12504 cri.go:89] found id: ""
	I1210 23:55:22.591493   12504 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:22.606358   12504 out.go:203] 
	W1210 23:55:22.609267   12504 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:22.609294   12504 out.go:285] * 
	W1210 23:55:22.613499   12504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:22.616427   12504 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.14s)
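This failure, like every other "addons disable" exit status 11 in this run, has the same root cause, visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and on this crio image /run/runc does not exist, so the check itself exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. A way to confirm the diagnosis by hand (the last command only reveals which runtime state directories exist; whether crio here is configured for crun or for runc with a non-default root is an open question, not something this report states):

	# Re-run the exact check minikube performs:
	out/minikube-linux-arm64 -p addons-903947 ssh -- sudo runc list -f json
	# Inspect crio's configured OCI runtime and the state dirs under /run:
	out/minikube-linux-arm64 -p addons-903947 ssh -- sudo crictl info
	out/minikube-linux-arm64 -p addons-903947 ssh -- ls -d /run/runc /run/crun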

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-brsbh" [5ce8cd5d-8ccc-4fb4-8630-9d19b287f0d1] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003584201s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (253.176ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:55:40.337319   12978 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:40.337482   12978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:40.337494   12978 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:40.337500   12978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:40.337751   12978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:40.338023   12978 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:40.338389   12978 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:40.338411   12978 addons.go:622] checking whether the cluster is paused
	I1210 23:55:40.338526   12978 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:40.338540   12978 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:40.339061   12978 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:40.356997   12978 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:40.357051   12978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:40.374583   12978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:40.481453   12978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:40.481541   12978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:40.510801   12978 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:40.510840   12978 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:40.510845   12978 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:40.510849   12978 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:40.510853   12978 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:40.510857   12978 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:40.510860   12978 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:40.510864   12978 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:40.510867   12978 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:40.510882   12978 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:40.510885   12978 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:40.510889   12978 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:40.510892   12978 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:40.510896   12978 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:40.510899   12978 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:40.510907   12978 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:40.510910   12978 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:40.510915   12978 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:40.510918   12978 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:40.510921   12978 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:40.510925   12978 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:40.510928   12978 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:40.510931   12978 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:40.510934   12978 cri.go:89] found id: ""
	I1210 23:55:40.511031   12978 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:40.524564   12978 out.go:203] 
	W1210 23:55:40.525989   12978 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:40.526015   12978 out.go:285] * 
	W1210 23:55:40.530266   12978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:40.531546   12978 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-903947 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-903947 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [c14430dc-d4b9-477f-ac8f-4ec1d534cfb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [c14430dc-d4b9-477f-ac8f-4ec1d534cfb9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [c14430dc-d4b9-477f-ac8f-4ec1d534cfb9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003940134s
addons_test.go:969: (dbg) Run:  kubectl --context addons-903947 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 ssh "cat /opt/local-path-provisioner/pvc-45afea10-2a67-458f-9aae-5cd553cc1102_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-903947 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-903947 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (261.160618ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:55:42.930188   13135 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:42.930362   13135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:42.930374   13135 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:42.930380   13135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:42.930737   13135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:42.931509   13135 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:42.931979   13135 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:42.932022   13135 addons.go:622] checking whether the cluster is paused
	I1210 23:55:42.932167   13135 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:42.932197   13135 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:42.932801   13135 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:42.949461   13135 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:42.949531   13135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:42.973758   13135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:43.077777   13135 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:43.077855   13135 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:43.109373   13135 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:43.109393   13135 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:43.109398   13135 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:43.109402   13135 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:43.109405   13135 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:43.109408   13135 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:43.109411   13135 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:43.109414   13135 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:43.109417   13135 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:43.109427   13135 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:43.109431   13135 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:43.109434   13135 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:43.109438   13135 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:43.109441   13135 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:43.109444   13135 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:43.109450   13135 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:43.109453   13135 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:43.109458   13135 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:43.109461   13135 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:43.109464   13135 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:43.109468   13135 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:43.109471   13135 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:43.109474   13135 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:43.109477   13135 cri.go:89] found id: ""
	I1210 23:55:43.109524   13135 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:43.122578   13135 out.go:203] 
	W1210 23:55:43.123617   13135 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:43.123676   13135 out.go:285] * 
	W1210 23:55:43.128035   13135 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:43.129668   13135 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.40s)
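The test body itself passed: only the final "addons disable storage-provisioner-rancher" step failed, for the same runc reason noted under TestAddons/parallel/Headlamp above. The pvc.yaml/pod.yaml testdata are not reproduced in this report, but the pattern above (test-pvc polled as Pending across six "get pvc" calls, then bound once test-local-path schedules) is what the rancher local-path provisioner's StorageClass produces when it uses volumeBindingMode: WaitForFirstConsumer, as it typically does. A minimal stand-in for the applied PVC, assuming the provisioner's default class name local-path:

	kubectl --context addons-903947 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path   # default class name of the rancher provisioner (assumed)
	  resources:
	    requests:
	      storage: 64Mi
	EOF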

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-mpzgr" [b637406a-12a8-4fbd-a5ce-9d3cb7f9d10b] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003388232s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (376.641144ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:55:34.949051   12811 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:34.949282   12811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:34.949317   12811 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:34.949339   12811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:34.949630   12811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:34.949941   12811 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:34.950381   12811 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:34.950429   12811 addons.go:622] checking whether the cluster is paused
	I1210 23:55:34.950569   12811 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:34.950604   12811 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:34.951186   12811 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:34.969924   12811 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:34.969985   12811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:34.988341   12811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:35.123040   12811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:35.123134   12811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:35.243023   12811 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:35.243045   12811 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:35.243052   12811 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:35.243056   12811 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:35.243060   12811 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:35.243065   12811 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:35.243068   12811 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:35.243076   12811 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:35.243081   12811 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:35.243090   12811 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:35.243094   12811 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:35.243097   12811 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:35.243100   12811 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:35.243103   12811 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:35.243106   12811 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:35.243115   12811 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:35.243118   12811 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:35.243123   12811 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:35.243128   12811 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:35.243132   12811 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:35.243139   12811 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:35.243142   12811 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:35.243145   12811 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:35.243148   12811 cri.go:89] found id: ""
	I1210 23:55:35.243203   12811 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:35.258734   12811 out.go:203] 
	W1210 23:55:35.260105   12811 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:35.260131   12811 out.go:285] * 
	* 
	W1210 23:55:35.264617   12811 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:35.265896   12811 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.38s)
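
Note on root cause: this failure, and the identical `addons disable ... exit status 11` failures elsewhere in this run (including the Yakd failure below), comes from minikube's pre-disable safety check rather than from the addon itself. Before disabling an addon, minikube verifies that the cluster is not paused; on this cri-o node that check shells out to `sudo runc list -f json`, which fails because the runc state directory /run/runc does not exist (cri-o is not keeping container state there). The sketch below is a minimal Go reproduction of that check, not minikube's actual implementation; the fallback that treats a missing state directory as "no runc-managed containers" is an assumption about one possible fix, not current behaviour.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// runcState mirrors the two fields of interest in `runc list -f json` output.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers runs the same command the failing check runs and returns
	// the IDs of paused containers. A missing /run/runc state directory (the
	// exact error above) is treated as "no runc-managed containers" rather than
	// a failure -- an assumed fallback, not minikube's current behaviour.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, fmt.Errorf("parse runc list output: %v", err)
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("paused:", ids)
	}

Run on the node (e.g. via `minikube ssh`), this prints `paused: []` instead of exiting with the `open /run/runc: no such file or directory` error seen above.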

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-t5k9j" [3b51eb46-77bf-4264-8f98-c51dde807bd4] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003532229s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-903947 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-903947 addons disable yakd --alsologtostderr -v=1: exit status 11 (262.472667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 23:55:28.678035   12563 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:55:28.678217   12563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:28.678227   12563 out.go:374] Setting ErrFile to fd 2...
	I1210 23:55:28.678233   12563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:55:28.678478   12563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:55:28.678836   12563 mustload.go:66] Loading cluster: addons-903947
	I1210 23:55:28.679263   12563 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:28.679289   12563 addons.go:622] checking whether the cluster is paused
	I1210 23:55:28.679414   12563 config.go:182] Loaded profile config "addons-903947": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 23:55:28.679429   12563 host.go:66] Checking if "addons-903947" exists ...
	I1210 23:55:28.679952   12563 cli_runner.go:164] Run: docker container inspect addons-903947 --format={{.State.Status}}
	I1210 23:55:28.697785   12563 ssh_runner.go:195] Run: systemctl --version
	I1210 23:55:28.697866   12563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-903947
	I1210 23:55:28.716316   12563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/addons-903947/id_rsa Username:docker}
	I1210 23:55:28.825578   12563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 23:55:28.825653   12563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 23:55:28.857350   12563 cri.go:89] found id: "943aa1912d4ebcf7ec0238b633c1d7c537e987ff2a93f95c852f99286db8ce7e"
	I1210 23:55:28.857426   12563 cri.go:89] found id: "3b5f3211aef973f4d4530875184fc8dc892bfc40aed2e2d4b4a321d149835eef"
	I1210 23:55:28.857446   12563 cri.go:89] found id: "994a8f897438ca5a5c02f01a96792f8fdb5efb2c9096576771d8ef32cefbb066"
	I1210 23:55:28.857467   12563 cri.go:89] found id: "5aabb9a953d205703f722a2ef4262a71db7e9480345a8d9aeac7b06c4cb12268"
	I1210 23:55:28.857505   12563 cri.go:89] found id: "c4e4cea51bd36d2fe08f0b9fcd69fdf236e166df402331d186f745c688897738"
	I1210 23:55:28.857529   12563 cri.go:89] found id: "8cb1a16ef86ba3ffce506676232e5b325ff507483a016dccf541049719bdd745"
	I1210 23:55:28.857550   12563 cri.go:89] found id: "a629933611fcec4e69c39ae2510f01e0421eb7c45e1d56dd53a38d39fd4b7bfe"
	I1210 23:55:28.857586   12563 cri.go:89] found id: "025942b6fe4993541df9a54aa9bacbda46eb72f40226626914c324a9b29ae746"
	I1210 23:55:28.857612   12563 cri.go:89] found id: "f5c570a6481f2c7e4b73e195e78b82c6b6e7a9a4593fb5e6a8ab40d444c4ef16"
	I1210 23:55:28.857636   12563 cri.go:89] found id: "56a6cc123f59d1064e6881245e7159f6c9a6e10816b1ad036c843ad5c06dff5e"
	I1210 23:55:28.857665   12563 cri.go:89] found id: "9b51fb4b4cd2a7f2c3580f3dc81ac134222377f7c46dfbcb09feac151ec1220e"
	I1210 23:55:28.857688   12563 cri.go:89] found id: "d16bf5857a0b5f19f53ffa528b8c2399d3aaa18ed1a42f3831edf6220ba2a131"
	I1210 23:55:28.857709   12563 cri.go:89] found id: "be88179f8ab31e2a9a418e1c9254abbc763c6c5fece1ce83b90e6ecbf9f09b78"
	I1210 23:55:28.857744   12563 cri.go:89] found id: "d84d9d1bef3578da5e08c9c9f7b5cd8c481dcc08cf0e5a4ae9847d54b1516a0b"
	I1210 23:55:28.857766   12563 cri.go:89] found id: "245f22fe409d8aa954d1882b859fe0c50907a3b35bbd8f1e481a1b87abdd1c83"
	I1210 23:55:28.857795   12563 cri.go:89] found id: "976bb3f5e7f34ba2309603a2160716ac0e9ef510d31d1cc558fc5f41d53c7df0"
	I1210 23:55:28.857836   12563 cri.go:89] found id: "09d359052fb270c67012314838ab5c51d5b6e86457a2ad1c48f40c17bbf4bb55"
	I1210 23:55:28.857860   12563 cri.go:89] found id: "98be1397391f42933de9bdfbbee70056b63f2a4f439b831606c084238d99325a"
	I1210 23:55:28.857882   12563 cri.go:89] found id: "49bf5d15dca739da93251d0aeccce3860d67be6cc9b90aa9088144528105cfe0"
	I1210 23:55:28.857915   12563 cri.go:89] found id: "97d59cbad9439d3830923a0bae49bb0c7ce707890747f73c9a949bc955cb590c"
	I1210 23:55:28.857940   12563 cri.go:89] found id: "ff91a260c6d642fbdcae87de07943e5a7fde0e5fd0e3cbe34e0c08011b431b5c"
	I1210 23:55:28.857959   12563 cri.go:89] found id: "b2794babaa59b7d0d13aabeadccd340bb1430ae1ccb73ce446db76a5a1197259"
	I1210 23:55:28.857978   12563 cri.go:89] found id: "479052386d5c3ec4e4f7e408b654e97f7f2cd5a08be361e50ed6aab0f2ec33a0"
	I1210 23:55:28.858014   12563 cri.go:89] found id: ""
	I1210 23:55:28.858096   12563 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 23:55:28.874212   12563 out.go:203] 
	W1210 23:55:28.877071   12563 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:55:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 23:55:28.877100   12563 out.go:285] * 
	* 
	W1210 23:55:28.881495   12563 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 23:55:28.884400   12563 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-903947 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1211 00:05:09.648539    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:05:37.361550    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.216177    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.222599    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.234091    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.255563    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.296983    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.378528    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.540040    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:21.861784    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:22.503878    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:23.785275    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:26.348148    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:31.469517    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:07:41.710961    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:08:02.192402    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:08:43.155102    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:10:05.076681    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:10:09.648558    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.793175641s)

                                                
                                                
-- stdout --
	* [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:44313
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:44313 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000901162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001258034s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001258034s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
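
Note on root cause: the start failed in kubeadm's `wait-control-plane` phase because the kubelet never answered its local health endpoint within the 4m0s deadline (twice, since minikube retried the init once). The kubelet-check described in the output is an HTTP GET against http://127.0.0.1:10248/healthz with a deadline; when triaging on the node, the same probe can be reproduced with `curl -sSL http://127.0.0.1:10248/healthz`, as the kubeadm error itself suggests, or with a few lines of Go. A minimal sketch:

	package main

	import (
		"context"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Same probe kubeadm's kubelet-check phase describes: GET the kubelet's
		// local healthz endpoint, bounded by a deadline.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			// e.g. "connection refused" while the kubelet is down, or a context
			// deadline exceeded -- which is what kubeadm reported above.
			fmt.Fprintln(os.Stderr, "kubelet not healthy:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s: %s\n", resp.Status, body)
	}

A non-nil error here confirms the kubelet process is down, at which point `systemctl status kubelet` and `journalctl -xeu kubelet`, both suggested in the output above, are the next step. The cgroups v1 deprecation warning in the preflight output is also worth ruling out: per that warning, kubelet v1.35 on a cgroups v1 host requires the kubelet configuration option 'FailCgroupV1' to be set to 'false'.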
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 6 (328.207517ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 00:11:30.322303   38835 status.go:458] kubeconfig endpoint: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
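
The exit-status-6 from `minikube status` is the expected follow-on of the failed start: the profile was never written to the kubeconfig, so the endpoint lookup fails. A quick way to confirm which contexts the harness's kubeconfig actually contains is client-go's clientcmd loader; this is an illustrative sketch only, assuming k8s.io/client-go is available and using the KUBECONFIG path from the log above.

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the KUBECONFIG value in the log above.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22061-2739/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
		_, ok := cfg.Contexts["functional-786978"]
		fmt.Println("functional-786978 present:", ok) // false here, matching the status error
	}

`minikube update-context`, as the status output itself suggests, would repair the entry once the cluster actually starts.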
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-976823 ssh sudo umount -f /mount-9p                                                                                                    │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh -- ls -la /mount-9p                                                                                                         │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh sudo umount -f /mount-9p                                                                                                    │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount1 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount3 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount1                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount2 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount2                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh findmnt -T /mount3                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 --kill=true                                                                                                                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format short --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh            │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image          │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete         │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start          │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:03:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:03:10.240798   33275 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:03:10.240948   33275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:03:10.240952   33275 out.go:374] Setting ErrFile to fd 2...
	I1211 00:03:10.240956   33275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:03:10.241207   33275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:03:10.241655   33275 out.go:368] Setting JSON to false
	I1211 00:03:10.242469   33275 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":877,"bootTime":1765410514,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:03:10.242525   33275 start.go:143] virtualization:  
	I1211 00:03:10.244377   33275 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:03:10.245983   33275 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:03:10.246166   33275 notify.go:221] Checking for updates...
	I1211 00:03:10.249215   33275 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:03:10.250627   33275 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:03:10.251990   33275 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:03:10.253174   33275 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:03:10.254693   33275 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:03:10.256121   33275 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:03:10.283071   33275 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:03:10.283169   33275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:03:10.348410   33275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-11 00:03:10.3388938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:03:10.348499   33275 docker.go:319] overlay module found
	I1211 00:03:10.350193   33275 out.go:179] * Using the docker driver based on user configuration
	I1211 00:03:10.351878   33275 start.go:309] selected driver: docker
	I1211 00:03:10.351888   33275 start.go:927] validating driver "docker" against <nil>
	I1211 00:03:10.351899   33275 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:03:10.352645   33275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:03:10.415658   33275 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-11 00:03:10.406925068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:03:10.415802   33275 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1211 00:03:10.416030   33275 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 00:03:10.417534   33275 out.go:179] * Using Docker driver with root privileges
	I1211 00:03:10.418751   33275 cni.go:84] Creating CNI manager for ""
	I1211 00:03:10.418822   33275 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:03:10.418829   33275 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 00:03:10.418904   33275 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:03:10.420475   33275 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:03:10.421454   33275 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:03:10.422727   33275 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:03:10.423992   33275 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:03:10.424035   33275 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:03:10.424043   33275 cache.go:65] Caching tarball of preloaded images
	I1211 00:03:10.424065   33275 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:03:10.424123   33275 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:03:10.424133   33275 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:03:10.424469   33275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:03:10.424488   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json: {Name:mke56aa64ee3128e6801875102187e9b7b385d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:10.445026   33275 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:03:10.445046   33275 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:03:10.445068   33275 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:03:10.445099   33275 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:03:10.445212   33275 start.go:364] duration metric: took 99.293µs to acquireMachinesLock for "functional-786978"
	I1211 00:03:10.445236   33275 start.go:93] Provisioning new machine with config: &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:03:10.445298   33275 start.go:125] createHost starting for "" (driver="docker")
	I1211 00:03:10.446946   33275 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1211 00:03:10.447210   33275 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:44313 to docker env.
	I1211 00:03:10.447232   33275 start.go:159] libmachine.API.Create for "functional-786978" (driver="docker")
	I1211 00:03:10.447250   33275 client.go:173] LocalClient.Create starting
	I1211 00:03:10.447304   33275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem
	I1211 00:03:10.447341   33275 main.go:143] libmachine: Decoding PEM data...
	I1211 00:03:10.447354   33275 main.go:143] libmachine: Parsing certificate...
	I1211 00:03:10.447400   33275 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem
	I1211 00:03:10.447417   33275 main.go:143] libmachine: Decoding PEM data...
	I1211 00:03:10.447427   33275 main.go:143] libmachine: Parsing certificate...
	I1211 00:03:10.447766   33275 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 00:03:10.464711   33275 cli_runner.go:211] docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 00:03:10.464794   33275 network_create.go:284] running [docker network inspect functional-786978] to gather additional debugging logs...
	I1211 00:03:10.464808   33275 cli_runner.go:164] Run: docker network inspect functional-786978
	W1211 00:03:10.482125   33275 cli_runner.go:211] docker network inspect functional-786978 returned with exit code 1
	I1211 00:03:10.482164   33275 network_create.go:287] error running [docker network inspect functional-786978]: docker network inspect functional-786978: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-786978 not found
	I1211 00:03:10.482180   33275 network_create.go:289] output of [docker network inspect functional-786978]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-786978 not found
	
	** /stderr **
	I1211 00:03:10.482309   33275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:03:10.498948   33275 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019393a0}
	I1211 00:03:10.499001   33275 network_create.go:124] attempt to create docker network functional-786978 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1211 00:03:10.499064   33275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-786978 functional-786978
	I1211 00:03:10.553333   33275 network_create.go:108] docker network functional-786978 192.168.49.0/24 created
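	The network-create step above is reproducible by hand; a minimal sketch, reusing the exact flags logged at 00:03:10.499 (only the final inspect format string is an addition here), useful for confirming the subnet that the kic.go line just below derives the static IP from:
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=functional-786978 \
	    functional-786978
	  # Print just the IPAM subnet of the network created above:
	  docker network inspect functional-786978 \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'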
	I1211 00:03:10.553355   33275 kic.go:121] calculated static IP "192.168.49.2" for the "functional-786978" container
	I1211 00:03:10.553428   33275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 00:03:10.568242   33275 cli_runner.go:164] Run: docker volume create functional-786978 --label name.minikube.sigs.k8s.io=functional-786978 --label created_by.minikube.sigs.k8s.io=true
	I1211 00:03:10.584570   33275 oci.go:103] Successfully created a docker volume functional-786978
	I1211 00:03:10.584658   33275 cli_runner.go:164] Run: docker run --rm --name functional-786978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-786978 --entrypoint /usr/bin/test -v functional-786978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1211 00:03:11.047173   33275 oci.go:107] Successfully prepared a docker volume functional-786978
	I1211 00:03:11.047245   33275 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:03:11.047253   33275 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 00:03:11.047320   33275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-786978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 00:03:15.063567   33275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-786978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.016204393s)
	I1211 00:03:15.063588   33275 kic.go:203] duration metric: took 4.016331518s to extract preloaded images to volume ...
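	The extraction above can be spot-checked the same way it was performed; a minimal sketch, assuming /bin/ls exists in the kicbase image and that the cri-o preload unpacks under lib/containers (neither is shown in this log; the image ref and volume name come from the docker run logged at 00:03:11.047):
	  docker run --rm --entrypoint /bin/ls \
	    -v functional-786978:/var \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f \
	    /var/lib/containers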
	W1211 00:03:15.063729   33275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1211 00:03:15.063836   33275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 00:03:15.127740   33275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-786978 --name functional-786978 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-786978 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-786978 --network functional-786978 --ip 192.168.49.2 --volume functional-786978:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1211 00:03:15.423757   33275 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Running}}
	I1211 00:03:15.445914   33275 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:03:15.478378   33275 cli_runner.go:164] Run: docker exec functional-786978 stat /var/lib/dpkg/alternatives/iptables
	I1211 00:03:15.530604   33275 oci.go:144] the created container "functional-786978" has a running status.
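	Every port in the docker run above is published on 127.0.0.1 with an ephemeral host port, so the mapping has to be looked up before connecting; a minimal sketch (the 22/tcp lookup is what the SSH steps below resolve to host port 32783):
	  # Print the host port Docker assigned to the container's SSH port:
	  docker port functional-786978 22/tcp
	  # Same for the API server port published above:
	  docker port functional-786978 8441/tcp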
	I1211 00:03:15.530621   33275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa...
	I1211 00:03:15.807858   33275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 00:03:15.829537   33275 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:03:15.854214   33275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 00:03:15.854226   33275 kic_runner.go:114] Args: [docker exec --privileged functional-786978 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1211 00:03:15.937116   33275 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:03:15.958939   33275 machine.go:94] provisionDockerMachine start ...
	I1211 00:03:15.959080   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:15.978834   33275 main.go:143] libmachine: Using SSH client type: native
	I1211 00:03:15.979167   33275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:03:15.979174   33275 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:03:15.979879   33275 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1211 00:03:19.130525   33275 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:03:19.130539   33275 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:03:19.130601   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:19.148881   33275 main.go:143] libmachine: Using SSH client type: native
	I1211 00:03:19.149197   33275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:03:19.149204   33275 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:03:19.308525   33275 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:03:19.308595   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:19.327194   33275 main.go:143] libmachine: Using SSH client type: native
	I1211 00:03:19.327507   33275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:03:19.327520   33275 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:03:19.475336   33275 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 00:03:19.475354   33275 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:03:19.475375   33275 ubuntu.go:190] setting up certificates
	I1211 00:03:19.475384   33275 provision.go:84] configureAuth start
	I1211 00:03:19.475448   33275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:03:19.493366   33275 provision.go:143] copyHostCerts
	I1211 00:03:19.493423   33275 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:03:19.493430   33275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:03:19.493508   33275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:03:19.493611   33275 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:03:19.493615   33275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:03:19.493645   33275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:03:19.493703   33275 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:03:19.493707   33275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:03:19.493733   33275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:03:19.493785   33275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:03:19.850565   33275 provision.go:177] copyRemoteCerts
	I1211 00:03:19.850621   33275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:03:19.850661   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:19.867848   33275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:03:19.970490   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 00:03:19.987513   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:03:20.005120   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:03:20.028979   33275 provision.go:87] duration metric: took 553.571092ms to configureAuth
	I1211 00:03:20.028996   33275 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:03:20.029194   33275 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:03:20.029306   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:20.049042   33275 main.go:143] libmachine: Using SSH client type: native
	I1211 00:03:20.049363   33275 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:03:20.049378   33275 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:03:20.373957   33275 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:03:20.373986   33275 machine.go:97] duration metric: took 4.415020309s to provisionDockerMachine
	I1211 00:03:20.373995   33275 client.go:176] duration metric: took 9.926740814s to LocalClient.Create
	I1211 00:03:20.374021   33275 start.go:167] duration metric: took 9.926788421s to libmachine.API.Create "functional-786978"
	I1211 00:03:20.374028   33275 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:03:20.374038   33275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:03:20.374109   33275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:03:20.374171   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:20.392619   33275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:03:20.503246   33275 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:03:20.506413   33275 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:03:20.506430   33275 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:03:20.506440   33275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:03:20.506495   33275 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:03:20.506586   33275 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:03:20.506667   33275 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:03:20.506717   33275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:03:20.514393   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:03:20.532669   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:03:20.551170   33275 start.go:296] duration metric: took 177.127744ms for postStartSetup
	I1211 00:03:20.551536   33275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:03:20.568712   33275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:03:20.569010   33275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:03:20.569050   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:20.589021   33275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:03:20.695907   33275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:03:20.700960   33275 start.go:128] duration metric: took 10.255649036s to createHost
	I1211 00:03:20.700975   33275 start.go:83] releasing machines lock for "functional-786978", held for 10.255755977s
	I1211 00:03:20.701047   33275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:03:20.722872   33275 out.go:179] * Found network options:
	I1211 00:03:20.726120   33275 out.go:179]   - HTTP_PROXY=localhost:44313
	W1211 00:03:20.729169   33275 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1211 00:03:20.732023   33275 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1211 00:03:20.735150   33275 ssh_runner.go:195] Run: cat /version.json
	I1211 00:03:20.735194   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:20.735220   33275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:03:20.735269   33275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:03:20.752483   33275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:03:20.758114   33275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:03:20.854880   33275 ssh_runner.go:195] Run: systemctl --version
	I1211 00:03:20.948483   33275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:03:20.983753   33275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 00:03:20.988024   33275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:03:20.988095   33275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:03:21.018033   33275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
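	The find above renames any bridge/podman CNI configs so the kindnet CNI recommended earlier can own pod networking; a minimal sketch for checking what was moved aside, assuming the .mk_disabled suffix from the -exec mv was applied to the two paths in the log line above:
	  ls -l /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	        /etc/cni/net.d/10-crio-bridge.conflist.disabled.mk_disabled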
	I1211 00:03:21.018047   33275 start.go:496] detecting cgroup driver to use...
	I1211 00:03:21.018082   33275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:03:21.018131   33275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:03:21.037094   33275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:03:21.050289   33275 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:03:21.050359   33275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:03:21.068212   33275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:03:21.087116   33275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:03:21.199268   33275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:03:21.313423   33275 docker.go:234] disabling docker service ...
	I1211 00:03:21.313487   33275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:03:21.334607   33275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:03:21.347658   33275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:03:21.457452   33275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:03:21.588059   33275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:03:21.601163   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:03:21.616891   33275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:03:21.616967   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.625680   33275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:03:21.625752   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.634873   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.644097   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.653036   33275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:03:21.661365   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.670161   33275 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.684328   33275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:03:21.693448   33275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:03:21.701186   33275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:03:21.708970   33275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:03:21.817033   33275 ssh_runner.go:195] Run: sudo systemctl restart crio
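	Taken together, the sed edits above should leave the following keys in the CRI-O drop-in; a minimal verification sketch, where the expected values are read off the commands above rather than off the file itself:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected, per the edits above:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])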
	I1211 00:03:21.993402   33275 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:03:21.993472   33275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:03:21.997296   33275 start.go:564] Will wait 60s for crictl version
	I1211 00:03:21.997362   33275 ssh_runner.go:195] Run: which crictl
	I1211 00:03:22.000835   33275 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:03:22.027221   33275 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:03:22.027308   33275 ssh_runner.go:195] Run: crio --version
	I1211 00:03:22.056587   33275 ssh_runner.go:195] Run: crio --version
	I1211 00:03:22.088367   33275 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:03:22.091206   33275 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:03:22.108417   33275 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:03:22.112569   33275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
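	The one-liner above is a hosts-file upsert: strip any line already ending in the name (the $'\t' anchors the match to the tab-separated field), append the fresh entry, then copy the temp file back with sudo because /etc/hosts is not writable by the SSH user. A minimal generalized sketch of the same idiom (upsert_host is a hypothetical helper name, not part of minikube):
	  upsert_host() {
	    ip=$1; name=$2
	    # Drop the old entry, if any, then append the new tab-separated one:
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	  }
	  upsert_host 192.168.49.1 host.minikube.internal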
	I1211 00:03:22.122813   33275 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:03:22.122916   33275 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:03:22.123001   33275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:03:22.161409   33275 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:03:22.161420   33275 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:03:22.161473   33275 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:03:22.187582   33275 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:03:22.187594   33275 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:03:22.187600   33275 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:03:22.187689   33275 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
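	A minimal sketch for confirming that systemd composed the kubelet unit above with its drop-in (both destination paths appear in the scp lines further below):
	  systemctl cat kubelet
	  # Should print /lib/systemd/system/kubelet.service followed by
	  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf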
	I1211 00:03:22.187768   33275 ssh_runner.go:195] Run: crio config
	I1211 00:03:22.252973   33275 cni.go:84] Creating CNI manager for ""
	I1211 00:03:22.252989   33275 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:03:22.253006   33275 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:03:22.253039   33275 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:03:22.253228   33275 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:03:22.253315   33275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:03:22.263332   33275 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:03:22.263399   33275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:03:22.270572   33275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:03:22.283064   33275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:03:22.295766   33275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
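	With the rendered kubeadm config now at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked without touching the node; a minimal sketch using kubeadm's --dry-run flag (the binary path matches the ls above; running this by hand is an illustration, not part of the logged flow):
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run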
	I1211 00:03:22.308261   33275 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:03:22.311846   33275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 00:03:22.321239   33275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:03:22.446232   33275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:03:22.462230   33275 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:03:22.462240   33275 certs.go:195] generating shared ca certs ...
	I1211 00:03:22.462256   33275 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:22.462397   33275 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:03:22.462436   33275 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:03:22.462451   33275 certs.go:257] generating profile certs ...
	I1211 00:03:22.462503   33275 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:03:22.462512   33275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt with IP's: []
	I1211 00:03:22.683037   33275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt ...
	I1211 00:03:22.683053   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: {Name:mk5e664cd044a4d90393706ffe03ef6ad7786f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:22.683254   33275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key ...
	I1211 00:03:22.683260   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key: {Name:mkf18a6d7e77aa4adfeac7d022602d7deef005e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:22.683357   33275 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:03:22.683369   33275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt.47ae6169 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1211 00:03:22.907692   33275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt.47ae6169 ...
	I1211 00:03:22.907715   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt.47ae6169: {Name:mk6673848a4154e6ec326274b0875095ad47392c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:22.907892   33275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169 ...
	I1211 00:03:22.907899   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169: {Name:mkc1e608f15473ed647466bd3f43870098680821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:22.907981   33275 certs.go:382] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt.47ae6169 -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt
	I1211 00:03:22.908059   33275 certs.go:386] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169 -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key
	I1211 00:03:22.908112   33275 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:03:22.908128   33275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt with IP's: []
	I1211 00:03:23.169679   33275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt ...
	I1211 00:03:23.169694   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt: {Name:mk8529e7dcbf7641709710a323e3c91f7758d393 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:23.169856   33275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key ...
	I1211 00:03:23.169864   33275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key: {Name:mkd5c3b0574f1f894c0f6a400657a6cce85634a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:03:23.170035   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:03:23.170074   33275 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:03:23.170082   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:03:23.170108   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:03:23.170132   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:03:23.170156   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:03:23.170197   33275 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
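The apiserver certificate generated above is issued for four IP SANs: the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node address (192.168.49.2). To confirm which SANs actually landed in a generated cert, standard `openssl x509 -text` output can be inspected directly (profile path as written above):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'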
	I1211 00:03:23.170792   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:03:23.188509   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:03:23.205732   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:03:23.224007   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:03:23.242699   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:03:23.259933   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:03:23.277865   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:03:23.295159   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:03:23.313538   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:03:23.330780   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:03:23.348114   33275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:03:23.365867   33275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:03:23.378524   33275 ssh_runner.go:195] Run: openssl version
	I1211 00:03:23.384983   33275 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:03:23.392388   33275 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:03:23.399994   33275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:03:23.403656   33275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:03:23.403726   33275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:03:23.445581   33275 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:03:23.453167   33275 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4875.pem /etc/ssl/certs/51391683.0
	I1211 00:03:23.460479   33275 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:03:23.468029   33275 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:03:23.475664   33275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:03:23.479299   33275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:03:23.479363   33275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:03:23.520303   33275 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:03:23.527804   33275 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/48752.pem /etc/ssl/certs/3ec20f2e.0
	I1211 00:03:23.534998   33275 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:03:23.542432   33275 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:03:23.550234   33275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:03:23.554042   33275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:03:23.554097   33275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:03:23.595305   33275 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:03:23.602980   33275 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
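The `51391683.0`, `3ec20f2e.0`, and `b5213941.0` targets above are OpenSSL subject-hash links: tools that trust /etc/ssl/certs locate a CA by the hash of its subject, so each installed PEM needs a `<hash>.0` alias. The hash-then-symlink sequence amounts to the following sketch, using the minikubeCA cert as the example:

	# compute the subject hash OpenSSL will look for during chain verification
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expected to print b5213941, matching the link created in the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"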
	I1211 00:03:23.610675   33275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:03:23.614274   33275 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 00:03:23.614315   33275 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:03:23.614392   33275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:03:23.614447   33275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:03:23.644507   33275 cri.go:89] found id: ""
	I1211 00:03:23.644567   33275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:03:23.652358   33275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:03:23.660045   33275 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:03:23.660101   33275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:03:23.667898   33275 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:03:23.667909   33275 kubeadm.go:158] found existing configuration files:
	
	I1211 00:03:23.667961   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:03:23.675669   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:03:23.675721   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:03:23.682828   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:03:23.690398   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:03:23.690454   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:03:23.698014   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:03:23.705767   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:03:23.705826   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:03:23.713244   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:03:23.721085   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:03:23.721156   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
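Each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; anything else is removed before `kubeadm init` so stale state cannot leak into the new cluster. The four grep/rm pairs above collapse to a loop like this (a sketch of the same logic, not minikube's code):

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # missing file or wrong endpoint => remove so kubeadm regenerates it
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done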
	I1211 00:03:23.728700   33275 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:03:23.769846   33275 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:03:23.769896   33275 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:03:23.842690   33275 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:03:23.842754   33275 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:03:23.842789   33275 kubeadm.go:319] OS: Linux
	I1211 00:03:23.842838   33275 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:03:23.842885   33275 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:03:23.842931   33275 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:03:23.842998   33275 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:03:23.843055   33275 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:03:23.843101   33275 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:03:23.843144   33275 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:03:23.843191   33275 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:03:23.843235   33275 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:03:23.920762   33275 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:03:23.920875   33275 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:03:23.920964   33275 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:03:23.930631   33275 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:03:23.937091   33275 out.go:252]   - Generating certificates and keys ...
	I1211 00:03:23.937172   33275 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:03:23.937238   33275 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:03:24.592427   33275 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 00:03:24.889274   33275 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 00:03:25.156474   33275 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 00:03:25.424005   33275 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 00:03:25.686781   33275 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 00:03:25.687053   33275 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 00:03:25.783506   33275 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 00:03:25.783655   33275 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1211 00:03:25.918045   33275 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 00:03:25.989664   33275 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 00:03:26.139278   33275 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 00:03:26.139585   33275 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:03:26.312110   33275 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:03:26.436878   33275 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:03:26.846444   33275 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:03:26.984974   33275 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:03:27.604400   33275 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:03:27.605275   33275 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:03:27.608180   33275 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:03:27.613628   33275 out.go:252]   - Booting up control plane ...
	I1211 00:03:27.613760   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:03:27.613867   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:03:27.613933   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:03:27.635748   33275 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:03:27.635848   33275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:03:27.644076   33275 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:03:27.645769   33275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:03:27.645895   33275 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:03:27.776160   33275 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:03:27.776271   33275 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:07:27.777005   33275 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000901162s
	I1211 00:07:27.777032   33275 kubeadm.go:319] 
	I1211 00:07:27.777133   33275 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:07:27.777191   33275 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:07:27.777519   33275 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:07:27.777527   33275 kubeadm.go:319] 
	I1211 00:07:27.777723   33275 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:07:27.778012   33275 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:07:27.778067   33275 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:07:27.778071   33275 kubeadm.go:319] 
	I1211 00:07:27.783078   33275 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:07:27.783585   33275 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:07:27.783735   33275 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:07:27.784015   33275 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:07:27.784022   33275 kubeadm.go:319] 
	I1211 00:07:27.784155   33275 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
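The wait-control-plane phase gives up because the kubelet's local health endpoint never answers. kubeadm is polling the same URL it names above, so the state can be probed directly on the node; a healthy kubelet answers `ok` on port 10248, and the two systemd commands it suggests show why it is not running:

	curl -sS http://127.0.0.1:10248/healthz ; echo
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50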
	W1211 00:07:27.784234   33275 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-786978 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000901162s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
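The cgroups-v1 warning above names the switch that kubelet v1.35 expects on a v1 host: `FailCgroupV1` must be set to false in the KubeletConfiguration. Since this run already applies a kubeletconfiguration patch (see the `[patches]` line), one way to carry the setting would be a patch file along these lines; the patch directory and file name here are illustrative, not taken from this run:

	cat <<'EOF' | sudo tee /var/tmp/minikube/patches/kubeletconfiguration.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF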
	
	I1211 00:07:27.784336   33275 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:07:28.208715   33275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:07:28.221593   33275 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:07:28.221650   33275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:07:28.229496   33275 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:07:28.229507   33275 kubeadm.go:158] found existing configuration files:
	
	I1211 00:07:28.229556   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:07:28.237301   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:07:28.237353   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:07:28.244561   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:07:28.252105   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:07:28.252161   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:07:28.259621   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:07:28.267032   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:07:28.267086   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:07:28.274249   33275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:07:28.281656   33275 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:07:28.281725   33275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:07:28.289228   33275 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:07:28.332746   33275 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:07:28.333058   33275 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:07:28.410307   33275 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:07:28.410397   33275 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:07:28.410443   33275 kubeadm.go:319] OS: Linux
	I1211 00:07:28.410487   33275 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:07:28.410542   33275 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:07:28.410594   33275 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:07:28.410642   33275 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:07:28.410700   33275 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:07:28.410757   33275 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:07:28.410809   33275 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:07:28.410856   33275 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:07:28.410916   33275 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:07:28.479689   33275 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:07:28.479803   33275 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:07:28.479915   33275 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:07:28.487527   33275 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:07:28.491261   33275 out.go:252]   - Generating certificates and keys ...
	I1211 00:07:28.491442   33275 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:07:28.491515   33275 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:07:28.491596   33275 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:07:28.491662   33275 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:07:28.491736   33275 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:07:28.491838   33275 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:07:28.491909   33275 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:07:28.492314   33275 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:07:28.492629   33275 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:07:28.492944   33275 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:07:28.493146   33275 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:07:28.493210   33275 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:07:28.577292   33275 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:07:28.781768   33275 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:07:29.034741   33275 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:07:29.276651   33275 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:07:29.382597   33275 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:07:29.383160   33275 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:07:29.385818   33275 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:07:29.390854   33275 out.go:252]   - Booting up control plane ...
	I1211 00:07:29.390954   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:07:29.391047   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:07:29.392384   33275 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:07:29.406570   33275 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:07:29.406930   33275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:07:29.414726   33275 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:07:29.415243   33275 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:07:29.415465   33275 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:07:29.547413   33275 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:07:29.547524   33275 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:11:29.546951   33275 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001258034s
	I1211 00:11:29.546988   33275 kubeadm.go:319] 
	I1211 00:11:29.547044   33275 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:11:29.547076   33275 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:11:29.547178   33275 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:11:29.547181   33275 kubeadm.go:319] 
	I1211 00:11:29.547284   33275 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:11:29.547314   33275 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:11:29.547343   33275 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:11:29.547346   33275 kubeadm.go:319] 
	I1211 00:11:29.551251   33275 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:11:29.551701   33275 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:11:29.551820   33275 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:11:29.552081   33275 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:11:29.552093   33275 kubeadm.go:319] 
	I1211 00:11:29.552226   33275 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:11:29.552238   33275 kubeadm.go:403] duration metric: took 8m5.937924116s to StartCluster
	I1211 00:11:29.552273   33275 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:11:29.552333   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:11:29.581433   33275 cri.go:89] found id: ""
	I1211 00:11:29.581456   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.581462   33275 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:11:29.581467   33275 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:11:29.581523   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:11:29.609724   33275 cri.go:89] found id: ""
	I1211 00:11:29.609738   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.609745   33275 logs.go:284] No container was found matching "etcd"
	I1211 00:11:29.609750   33275 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:11:29.609814   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:11:29.635740   33275 cri.go:89] found id: ""
	I1211 00:11:29.635753   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.635760   33275 logs.go:284] No container was found matching "coredns"
	I1211 00:11:29.635765   33275 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:11:29.635824   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:11:29.665894   33275 cri.go:89] found id: ""
	I1211 00:11:29.665909   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.665916   33275 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:11:29.665921   33275 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:11:29.665980   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:11:29.694572   33275 cri.go:89] found id: ""
	I1211 00:11:29.694586   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.694594   33275 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:11:29.694598   33275 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:11:29.694657   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:11:29.725530   33275 cri.go:89] found id: ""
	I1211 00:11:29.725545   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.725551   33275 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:11:29.725556   33275 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:11:29.725614   33275 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:11:29.754032   33275 cri.go:89] found id: ""
	I1211 00:11:29.754045   33275 logs.go:282] 0 containers: []
	W1211 00:11:29.754053   33275 logs.go:284] No container was found matching "kindnet"
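All seven lookups come back empty because kubeadm never got far enough to schedule anything. The sweep minikube performs above is equivalent to one pass over the same names, with the same `crictl` flags as in the log:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    echo "== $name =="
	    sudo crictl ps -a --quiet --name="$name"
	done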
	I1211 00:11:29.754060   33275 logs.go:123] Gathering logs for kubelet ...
	I1211 00:11:29.754070   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:11:29.820704   33275 logs.go:123] Gathering logs for dmesg ...
	I1211 00:11:29.820721   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:11:29.832205   33275 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:11:29.832219   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:11:29.895873   33275 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:11:29.887206    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.887873    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.889591    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.890267    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.891873    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:11:29.887206    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.887873    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.889591    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.890267    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:29.891873    4854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
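`connection refused` on 8441 means no process is bound to the apiserver port at all, consistent with the empty kube-apiserver container listing earlier; it is a different failure mode from a TLS or auth error, where the port would be open. Two quick checks that separate the cases (standard curl/ss usage, not commands from this run):

	# refused => nothing listening; a TLS/cert error would mean the port is up
	curl -k https://localhost:8441/livez ; echo
	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"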
	I1211 00:11:29.895893   33275 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:11:29.895904   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:11:29.927538   33275 logs.go:123] Gathering logs for container status ...
	I1211 00:11:29.927554   33275 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1211 00:11:29.957723   33275 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001258034s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 00:11:29.957771   33275 out.go:285] * 
	W1211 00:11:29.957848   33275 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001258034s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:11:29.957916   33275 out.go:285] * 
	W1211 00:11:29.960083   33275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:11:29.965683   33275 out.go:203] 
	W1211 00:11:29.968506   33275 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001258034s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:11:29.968551   33275 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 00:11:29.968568   33275 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 00:11:29.971766   33275 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.987893304Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.98805005Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988119057Z" level=info msg="Create NRI interface"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988223027Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988238462Z" level=info msg="runtime interface created"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988251598Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988258975Z" level=info msg="runtime interface starting up..."
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988267582Z" level=info msg="starting plugins..."
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.98828204Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:03:21 functional-786978 crio[842]: time="2025-12-11T00:03:21.988343605Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:03:21 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.924719961Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=27410bc2-61c0-419a-a2a5-8ed544ce74e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.925928904Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=73255dcf-4a7b-46b0-8096-1424b350dac6 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.926552552Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e3d8b432-60b3-4a9e-95cf-6b3379656b23 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.927258531Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9367b4a1-bdef-432a-908d-2834b346bb4a name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.927958004Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f3944459-2e14-4049-916c-f571d6a266fb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.928547066Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=60441eee-bbac-463c-bd4c-a999122688b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:03:23 functional-786978 crio[842]: time="2025-12-11T00:03:23.929159037Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4baf589f-b038-4a07-a498-11b27b1cc67f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.482764787Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=de909ea7-f555-48c3-ae5b-d11772d00962 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.483607641Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f0eb0249-f1a5-4d24-89cc-3738e6c31011 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.484213187Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=652580b9-a2e4-4b18-8ca0-bf7e6d1b3649 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.484755636Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=70c92215-fdd6-475b-819d-a927278ab594 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.48530007Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=5aa90478-83dd-4dd5-ba09-6d445d4ae296 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.485718056Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c8b3412d-45c0-49c5-a625-7132a2ae23a0 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:07:28 functional-786978 crio[842]: time="2025-12-11T00:07:28.486189743Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=14c21e21-ae66-4763-8032-9ded9745d5f3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:11:30.953095    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:30.953763    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:30.955471    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:30.955994    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:11:30.957617    4975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:11:30 up 22 min,  0 user,  load average: 0.19, 0.45, 0.61
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:11:28 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:11:28 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 11 00:11:28 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:28 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:28 functional-786978 kubelet[4780]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:28 functional-786978 kubelet[4780]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:28 functional-786978 kubelet[4780]: E1211 00:11:28.952061    4780 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:11:28 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:11:28 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:11:29 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 11 00:11:29 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:29 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:29 functional-786978 kubelet[4814]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:29 functional-786978 kubelet[4814]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:29 functional-786978 kubelet[4814]: E1211 00:11:29.712451    4814 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:11:29 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:11:29 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:11:30 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 11 00:11:30 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:30 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:11:30 functional-786978 kubelet[4887]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:30 functional-786978 kubelet[4887]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:11:30 functional-786978 kubelet[4887]: E1211 00:11:30.473313    4887 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:11:30 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:11:30 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 6 (429.154203ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1211 00:11:31.501872   39055 status.go:458] kubeconfig endpoint: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig

** /stderr **
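The status probe above fails for a second reason, visible in its stderr: the "functional-786978" entry is missing from the kubeconfig, which also explains the stale-context warning in the stdout. A minimal repair sketch, assuming the profile name shown in these logs (this fixes only the client context, not the stopped apiserver):

	minikube update-context -p functional-786978   # re-point the kubeconfig entry at this profile
	kubectl config current-context                 # should now print functional-786978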
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.32s)
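The repeated kubelet restarts in the journal above all trace to one validation failure: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", and this host (kernel 5.15.0-1084-aws, docker CgroupDriver:cgroupfs) is still on cgroup v1, so kubeadm's wait-control-plane phase times out after 4m0s. The SystemVerification warning names the opt-out, and minikube itself suggests an extra-config flag. A minimal sketch of both workarounds, assuming the profile name from these logs; the failCgroupV1 field name follows the warning text and the KubeletConfiguration v1beta1 API, and is not verified against this exact beta build:

	# Workaround 1: minikube's own suggestion from the error output above
	minikube start -p functional-786978 --extra-config=kubelet.cgroup-driver=systemd

	# Workaround 2 (hypothetical): re-allow cgroup v1 through the kubeadm
	# "kubeletconfiguration" patch target that the [patches] log line shows
	# minikube already using. The file name follows kubeadm's
	# {target}+{patchtype}.yaml convention; "patchdir" is a placeholder.
	mkdir -p patchdir
	cat <<'EOF' > patchdir/kubeletconfiguration+strategic.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches patchdir

Note that the kubeadm invocation in the error output already ignores the SystemVerification preflight check, so on this host only the missing failCgroupV1 opt-out (or a migration to cgroup v2) separates the kubelet from a clean start; switching the cgroup driver alone may not be sufficient.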

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1211 00:11:31.516167    4875 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --alsologtostderr -v=8
E1211 00:12:21.216246    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:12:48.919026    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:15:09.648355    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:16:32.722947    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:17:21.216390    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-786978 --alsologtostderr -v=8: exit status 80 (6m5.907855221s)

-- stdout --
	* [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1211 00:11:31.563230   39129 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:11:31.563658   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563678   39129 out.go:374] Setting ErrFile to fd 2...
	I1211 00:11:31.563685   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563986   39129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:11:31.564407   39129 out.go:368] Setting JSON to false
	I1211 00:11:31.565211   39129 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1378,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:11:31.565283   39129 start.go:143] virtualization:  
	I1211 00:11:31.568710   39129 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:11:31.572525   39129 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:11:31.572647   39129 notify.go:221] Checking for updates...
	I1211 00:11:31.578309   39129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:11:31.581264   39129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:31.584071   39129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:11:31.586801   39129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:11:31.589632   39129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:11:31.593067   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:31.593203   39129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:11:31.624525   39129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:11:31.624640   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.680227   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.670392474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.680335   39129 docker.go:319] overlay module found
	I1211 00:11:31.683507   39129 out.go:179] * Using the docker driver based on existing profile
	I1211 00:11:31.686334   39129 start.go:309] selected driver: docker
	I1211 00:11:31.686351   39129 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.686457   39129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:11:31.686564   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.744265   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.73545255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.744665   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:31.744728   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:31.744781   39129 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.747938   39129 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:11:31.750895   39129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:11:31.753857   39129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:11:31.756592   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:31.756636   39129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:11:31.756650   39129 cache.go:65] Caching tarball of preloaded images
	I1211 00:11:31.756687   39129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:11:31.756736   39129 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:11:31.756746   39129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:11:31.756847   39129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:11:31.775263   39129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:11:31.775283   39129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:11:31.775304   39129 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:11:31.775335   39129 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:11:31.775391   39129 start.go:364] duration metric: took 34.412µs to acquireMachinesLock for "functional-786978"
	I1211 00:11:31.775414   39129 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:11:31.775420   39129 fix.go:54] fixHost starting: 
	I1211 00:11:31.775679   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:31.791888   39129 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:11:31.791920   39129 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:11:31.795111   39129 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:11:31.795143   39129 machine.go:94] provisionDockerMachine start ...
	I1211 00:11:31.795229   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.811419   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.811754   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.811770   39129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:11:31.962366   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:31.962392   39129 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:11:31.962456   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.979928   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.980236   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.980251   39129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:11:32.139976   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:32.140054   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.158886   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.159253   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.159279   39129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:11:32.307553   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 00:11:32.307588   39129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:11:32.307609   39129 ubuntu.go:190] setting up certificates
	I1211 00:11:32.307618   39129 provision.go:84] configureAuth start
	I1211 00:11:32.307677   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:32.326881   39129 provision.go:143] copyHostCerts
	I1211 00:11:32.326928   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.326981   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:11:32.326990   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.327094   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:11:32.327189   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327219   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:11:32.327229   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327259   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:11:32.327306   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327328   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:11:32.327337   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327369   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:11:32.327438   39129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:11:32.651770   39129 provision.go:177] copyRemoteCerts
	I1211 00:11:32.651883   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:11:32.651966   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.672496   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:32.786699   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 00:11:32.786771   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:11:32.804288   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 00:11:32.804348   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:11:32.822111   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 00:11:32.822172   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 00:11:32.839310   39129 provision.go:87] duration metric: took 531.679958ms to configureAuth
	I1211 00:11:32.839337   39129 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:11:32.839540   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:32.839656   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.857209   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.857554   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.857577   39129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:11:33.187304   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:11:33.187369   39129 machine.go:97] duration metric: took 1.392217167s to provisionDockerMachine
	I1211 00:11:33.187397   39129 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:11:33.187428   39129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:11:33.187507   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:11:33.187571   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.206116   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.310766   39129 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:11:33.313950   39129 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1211 00:11:33.313971   39129 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1211 00:11:33.313977   39129 command_runner.go:130] > VERSION_ID="12"
	I1211 00:11:33.313982   39129 command_runner.go:130] > VERSION="12 (bookworm)"
	I1211 00:11:33.313987   39129 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1211 00:11:33.313990   39129 command_runner.go:130] > ID=debian
	I1211 00:11:33.313995   39129 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1211 00:11:33.314000   39129 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1211 00:11:33.314006   39129 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1211 00:11:33.314074   39129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:11:33.314099   39129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:11:33.314110   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:11:33.314165   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:11:33.314254   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:11:33.314265   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 00:11:33.314342   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:11:33.314349   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> /etc/test/nested/copy/4875/hosts
	I1211 00:11:33.314395   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:11:33.321833   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:33.338845   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:11:33.355788   39129 start.go:296] duration metric: took 168.358579ms for postStartSetup
	I1211 00:11:33.355933   39129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:11:33.355981   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.374136   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.483570   39129 command_runner.go:130] > 14%
	I1211 00:11:33.484133   39129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:11:33.488331   39129 command_runner.go:130] > 168G
	I1211 00:11:33.488874   39129 fix.go:56] duration metric: took 1.713448769s for fixHost
	I1211 00:11:33.488896   39129 start.go:83] releasing machines lock for "functional-786978", held for 1.713491657s
	I1211 00:11:33.488966   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:33.505970   39129 ssh_runner.go:195] Run: cat /version.json
	I1211 00:11:33.506004   39129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:11:33.506020   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.506067   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.524523   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.532688   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.712031   39129 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1211 00:11:33.714840   39129 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1211 00:11:33.715004   39129 ssh_runner.go:195] Run: systemctl --version
	I1211 00:11:33.720988   39129 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1211 00:11:33.721023   39129 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1211 00:11:33.721418   39129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:11:33.758142   39129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 00:11:33.762640   39129 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1211 00:11:33.762695   39129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:11:33.762759   39129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:11:33.770580   39129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:11:33.770605   39129 start.go:496] detecting cgroup driver to use...
	I1211 00:11:33.770636   39129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:11:33.770683   39129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:11:33.785751   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:11:33.798781   39129 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:11:33.798859   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:11:33.814594   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:11:33.828060   39129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:11:33.939426   39129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:11:34.063996   39129 docker.go:234] disabling docker service ...
	I1211 00:11:34.064079   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:11:34.088847   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:11:34.106427   39129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:11:34.233444   39129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:11:34.359250   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:11:34.371772   39129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:11:34.384768   39129 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1211 00:11:34.385910   39129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:11:34.386015   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.395329   39129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:11:34.395408   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.404378   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.412986   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.421585   39129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:11:34.429722   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.438361   39129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.447060   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.456153   39129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:11:34.462793   39129 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1211 00:11:34.463922   39129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:11:34.471096   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:34.576052   39129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 00:11:34.729272   39129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:11:34.729346   39129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:11:34.732930   39129 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1211 00:11:34.732954   39129 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1211 00:11:34.732962   39129 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1211 00:11:34.732969   39129 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:34.732973   39129 command_runner.go:130] > Access: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732985   39129 command_runner.go:130] > Modify: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732992   39129 command_runner.go:130] > Change: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732995   39129 command_runner.go:130] >  Birth: -
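Here the socket already existed on the first stat; the 60 s budget covers slower restarts. A hedged shell sketch of the wait that minikube performs via repeated stat over SSH:

    # poll up to 60s for the CRI-O socket to appear after restart
    for _ in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && break
      sleep 1
    done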
	I1211 00:11:34.733171   39129 start.go:564] Will wait 60s for crictl version
	I1211 00:11:34.733232   39129 ssh_runner.go:195] Run: which crictl
	I1211 00:11:34.736601   39129 command_runner.go:130] > /usr/local/bin/crictl
	I1211 00:11:34.736687   39129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:11:34.757793   39129 command_runner.go:130] > Version:  0.1.0
	I1211 00:11:34.757906   39129 command_runner.go:130] > RuntimeName:  cri-o
	I1211 00:11:34.757921   39129 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1211 00:11:34.757928   39129 command_runner.go:130] > RuntimeApiVersion:  v1
	I1211 00:11:34.760151   39129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
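RuntimeApiVersion v1 means CRI-O is serving the CRI v1 API this Kubernetes version expects. The same report can be pulled without relying on /etc/crictl.yaml by passing the endpoint explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version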
	I1211 00:11:34.760230   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.787961   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.787986   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.787993   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.787998   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.788005   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.788009   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.788013   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.788019   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.788024   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.788028   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.788035   39129 command_runner.go:130] >      static
	I1211 00:11:34.788039   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.788043   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.788051   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.788055   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.788058   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.788069   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.788074   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.788080   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.788088   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.789644   39129 ssh_runner.go:195] Run: crio --version
	(output of the second `crio --version` run is identical to the block above; omitted)
	I1211 00:11:34.822208   39129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:11:34.825193   39129 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:11:34.839960   39129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:11:34.843868   39129 command_runner.go:130] > 192.168.49.1	host.minikube.internal
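The host.minikube.internal entry maps to 192.168.49.1, the gateway of the docker network inspected above, so workloads inside the node can reach services on the host. To verify by hand:

    grep host.minikube.internal /etc/hosts    # expect: 192.168.49.1	host.minikube.internal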
	I1211 00:11:34.843970   39129 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:11:34.844072   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:34.844127   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.876890   39129 command_runner.go:130] > {
	I1211 00:11:34.876911   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.876915   39129 command_runner.go:130] >     {
	I1211 00:11:34.876923   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.876928   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.876934   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.876937   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876941   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.876951   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.876963   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.876967   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876971   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.876979   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.876984   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.876987   39129 command_runner.go:130] >     },
	I1211 00:11:34.876991   39129 command_runner.go:130] >     {
	I1211 00:11:34.876997   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.877005   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877011   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.877014   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877018   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877026   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.877038   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.877042   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877046   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.877053   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877060   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877067   39129 command_runner.go:130] >     },
	I1211 00:11:34.877070   39129 command_runner.go:130] >     {
	I1211 00:11:34.877077   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.877089   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877094   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.877098   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877113   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877124   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.877132   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.877139   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877143   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.877147   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.877151   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877154   39129 command_runner.go:130] >     },
	I1211 00:11:34.877158   39129 command_runner.go:130] >     {
	I1211 00:11:34.877165   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.877171   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877176   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.877180   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877186   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877194   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.877204   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.877211   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877216   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.877219   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877224   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877234   39129 command_runner.go:130] >       },
	I1211 00:11:34.877242   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877253   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877257   39129 command_runner.go:130] >     },
	I1211 00:11:34.877260   39129 command_runner.go:130] >     {
	I1211 00:11:34.877267   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.877271   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877280   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.877287   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877291   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877299   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.877309   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.877317   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877326   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.877334   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877343   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877347   39129 command_runner.go:130] >       },
	I1211 00:11:34.877351   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877359   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877363   39129 command_runner.go:130] >     },
	I1211 00:11:34.877367   39129 command_runner.go:130] >     {
	I1211 00:11:34.877374   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.877381   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877387   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.877390   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877394   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877411   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.877420   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.877426   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877430   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.877434   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877438   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877441   39129 command_runner.go:130] >       },
	I1211 00:11:34.877445   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877450   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877455   39129 command_runner.go:130] >     },
	I1211 00:11:34.877459   39129 command_runner.go:130] >     {
	I1211 00:11:34.877473   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.877476   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.877490   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877494   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877502   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.877512   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.877516   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877520   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.877527   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877534   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877538   39129 command_runner.go:130] >     },
	I1211 00:11:34.877550   39129 command_runner.go:130] >     {
	I1211 00:11:34.877556   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.877560   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877565   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.877571   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877575   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877582   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.877602   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.877606   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877614   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.877618   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877630   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877633   39129 command_runner.go:130] >       },
	I1211 00:11:34.877636   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877640   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877646   39129 command_runner.go:130] >     },
	I1211 00:11:34.877649   39129 command_runner.go:130] >     {
	I1211 00:11:34.877656   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.877662   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877667   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.877670   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877674   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877681   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.877695   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.877699   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877703   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.877707   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877714   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.877717   39129 command_runner.go:130] >       },
	I1211 00:11:34.877721   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877732   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.877738   39129 command_runner.go:130] >     }
	I1211 00:11:34.877741   39129 command_runner.go:130] >   ]
	I1211 00:11:34.877744   39129 command_runner.go:130] > }
	I1211 00:11:34.877906   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.877920   39129 crio.go:433] Images already preloaded, skipping extraction
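Since every image required for v1.35.0-beta.0 is already in CRI-O's store, the preload tarball extraction is skipped. The check can be reproduced by hand (jq assumed available):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'   # list every tagged image CRI-O holds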
	I1211 00:11:34.877980   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	(output of the second `sudo crictl images --output json` run is identical to the JSON dump above; omitted)
	I1211 00:11:34.908324   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.908347   39129 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:11:34.908354   39129 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:11:34.908461   39129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
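This systemd drop-in clears the packaged ExecStart (the empty `ExecStart=` line) and replaces it with the version-pinned kubelet under /var/lib/minikube/binaries, wired to CRI-O via the bootstrap kubeconfig and node IP. Whatever path minikube writes the fragment to, the effective unit can be inspected with:

    systemctl cat kubelet    # prints kubelet.service plus all drop-in fragments, including this override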
	I1211 00:11:34.908543   39129 ssh_runner.go:195] Run: crio config
	I1211 00:11:34.971791   39129 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1211 00:11:34.971813   39129 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1211 00:11:34.971821   39129 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1211 00:11:34.971824   39129 command_runner.go:130] > #
	I1211 00:11:34.971832   39129 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1211 00:11:34.971839   39129 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1211 00:11:34.971846   39129 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1211 00:11:34.971853   39129 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1211 00:11:34.971857   39129 command_runner.go:130] > # reload'.
	I1211 00:11:34.971875   39129 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1211 00:11:34.971882   39129 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1211 00:11:34.971888   39129 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1211 00:11:34.971894   39129 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1211 00:11:34.971898   39129 command_runner.go:130] > [crio]
	I1211 00:11:34.971903   39129 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1211 00:11:34.971908   39129 command_runner.go:130] > # containers images, in this directory.
	I1211 00:11:34.972453   39129 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1211 00:11:34.972468   39129 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1211 00:11:34.973023   39129 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1211 00:11:34.973035   39129 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1211 00:11:34.973741   39129 command_runner.go:130] > # imagestore = ""
	I1211 00:11:34.973760   39129 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1211 00:11:34.973768   39129 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1211 00:11:34.973950   39129 command_runner.go:130] > # storage_driver = "overlay"
	I1211 00:11:34.973965   39129 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1211 00:11:34.973972   39129 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1211 00:11:34.974083   39129 command_runner.go:130] > # storage_option = [
	I1211 00:11:34.974240   39129 command_runner.go:130] > # ]
	I1211 00:11:34.974255   39129 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1211 00:11:34.974262   39129 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1211 00:11:34.974433   39129 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1211 00:11:34.974477   39129 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1211 00:11:34.974487   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1211 00:11:34.974492   39129 command_runner.go:130] > # always happen on a node reboot
	I1211 00:11:34.974707   39129 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1211 00:11:34.974755   39129 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1211 00:11:34.974769   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1211 00:11:34.974774   39129 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1211 00:11:34.974951   39129 command_runner.go:130] > # version_file_persist = ""
	I1211 00:11:34.974999   39129 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1211 00:11:34.975014   39129 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1211 00:11:34.975286   39129 command_runner.go:130] > # internal_wipe = true
	I1211 00:11:34.975303   39129 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1211 00:11:34.975309   39129 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1211 00:11:34.975533   39129 command_runner.go:130] > # internal_repair = true
	I1211 00:11:34.975547   39129 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1211 00:11:34.975554   39129 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1211 00:11:34.975560   39129 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1211 00:11:34.975800   39129 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1211 00:11:34.975813   39129 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1211 00:11:34.975817   39129 command_runner.go:130] > [crio.api]
	I1211 00:11:34.975838   39129 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1211 00:11:34.976047   39129 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1211 00:11:34.976068   39129 command_runner.go:130] > # IP address on which the stream server will listen.
	I1211 00:11:34.976289   39129 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1211 00:11:34.976305   39129 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1211 00:11:34.976322   39129 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1211 00:11:34.976522   39129 command_runner.go:130] > # stream_port = "0"
	I1211 00:11:34.976537   39129 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1211 00:11:34.976743   39129 command_runner.go:130] > # stream_enable_tls = false
	I1211 00:11:34.976759   39129 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1211 00:11:34.976966   39129 command_runner.go:130] > # stream_idle_timeout = ""
	I1211 00:11:34.976981   39129 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1211 00:11:34.976987   39129 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977102   39129 command_runner.go:130] > # stream_tls_cert = ""
	I1211 00:11:34.977116   39129 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1211 00:11:34.977122   39129 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977375   39129 command_runner.go:130] > # stream_tls_key = ""
	I1211 00:11:34.977408   39129 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1211 00:11:34.977433   39129 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1211 00:11:34.977440   39129 command_runner.go:130] > # automatically pick up the changes.
	I1211 00:11:34.977571   39129 command_runner.go:130] > # stream_tls_ca = ""
	I1211 00:11:34.977641   39129 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977779   39129 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1211 00:11:34.977797   39129 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977991   39129 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1211 00:11:34.978007   39129 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1211 00:11:34.978040   39129 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1211 00:11:34.978056   39129 command_runner.go:130] > [crio.runtime]
	I1211 00:11:34.978069   39129 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1211 00:11:34.978076   39129 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1211 00:11:34.978080   39129 command_runner.go:130] > # "nofile=1024:2048"
	I1211 00:11:34.978086   39129 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1211 00:11:34.978208   39129 command_runner.go:130] > # default_ulimits = [
	I1211 00:11:34.978352   39129 command_runner.go:130] > # ]
	I1211 00:11:34.978369   39129 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1211 00:11:34.978551   39129 command_runner.go:130] > # no_pivot = false
	I1211 00:11:34.978566   39129 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1211 00:11:34.978572   39129 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1211 00:11:34.978723   39129 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1211 00:11:34.978739   39129 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1211 00:11:34.978744   39129 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1211 00:11:34.978775   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.978921   39129 command_runner.go:130] > # conmon = ""
	I1211 00:11:34.978933   39129 command_runner.go:130] > # Cgroup setting for conmon
	I1211 00:11:34.978941   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1211 00:11:34.979286   39129 command_runner.go:130] > conmon_cgroup = "pod"
	I1211 00:11:34.979301   39129 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1211 00:11:34.979307   39129 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1211 00:11:34.979343   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.979348   39129 command_runner.go:130] > # conmon_env = [
	I1211 00:11:34.979496   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979512   39129 command_runner.go:130] > # Additional environment variables to set for all the
	I1211 00:11:34.979518   39129 command_runner.go:130] > # containers. These are overridden if set in the
	I1211 00:11:34.979524   39129 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1211 00:11:34.979552   39129 command_runner.go:130] > # default_env = [
	I1211 00:11:34.979707   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979725   39129 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1211 00:11:34.979734   39129 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1211 00:11:34.979983   39129 command_runner.go:130] > # selinux = false
	I1211 00:11:34.980000   39129 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1211 00:11:34.980009   39129 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1211 00:11:34.980015   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980366   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.980414   39129 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1211 00:11:34.980429   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980434   39129 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1211 00:11:34.980447   39129 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1211 00:11:34.980453   39129 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1211 00:11:34.980464   39129 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1211 00:11:34.980471   39129 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1211 00:11:34.980493   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980499   39129 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1211 00:11:34.980514   39129 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1211 00:11:34.980524   39129 command_runner.go:130] > # the cgroup blockio controller.
	I1211 00:11:34.980678   39129 command_runner.go:130] > # blockio_config_file = ""
	I1211 00:11:34.980713   39129 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1211 00:11:34.980723   39129 command_runner.go:130] > # blockio parameters.
	I1211 00:11:34.980981   39129 command_runner.go:130] > # blockio_reload = false
	I1211 00:11:34.980995   39129 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1211 00:11:34.980999   39129 command_runner.go:130] > # irqbalance daemon.
	I1211 00:11:34.981198   39129 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1211 00:11:34.981209   39129 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1211 00:11:34.981217   39129 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1211 00:11:34.981265   39129 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1211 00:11:34.981385   39129 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1211 00:11:34.981396   39129 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1211 00:11:34.981402   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.981515   39129 command_runner.go:130] > # rdt_config_file = ""
	I1211 00:11:34.981525   39129 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1211 00:11:34.981657   39129 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1211 00:11:34.981668   39129 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1211 00:11:34.981795   39129 command_runner.go:130] > # separate_pull_cgroup = ""
	I1211 00:11:34.981809   39129 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1211 00:11:34.981816   39129 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1211 00:11:34.981820   39129 command_runner.go:130] > # will be added.
	I1211 00:11:34.981926   39129 command_runner.go:130] > # default_capabilities = [
	I1211 00:11:34.982055   39129 command_runner.go:130] > # 	"CHOWN",
	I1211 00:11:34.982151   39129 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1211 00:11:34.982256   39129 command_runner.go:130] > # 	"FSETID",
	I1211 00:11:34.982350   39129 command_runner.go:130] > # 	"FOWNER",
	I1211 00:11:34.982451   39129 command_runner.go:130] > # 	"SETGID",
	I1211 00:11:34.982543   39129 command_runner.go:130] > # 	"SETUID",
	I1211 00:11:34.982687   39129 command_runner.go:130] > # 	"SETPCAP",
	I1211 00:11:34.982695   39129 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1211 00:11:34.982819   39129 command_runner.go:130] > # 	"KILL",
	I1211 00:11:34.982949   39129 command_runner.go:130] > # ]
	I1211 00:11:34.982960   39129 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1211 00:11:34.982993   39129 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1211 00:11:34.983107   39129 command_runner.go:130] > # add_inheritable_capabilities = false
	I1211 00:11:34.983118   39129 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1211 00:11:34.983132   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983136   39129 command_runner.go:130] > default_sysctls = [
	I1211 00:11:34.983272   39129 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1211 00:11:34.983279   39129 command_runner.go:130] > ]
	I1211 00:11:34.983285   39129 command_runner.go:130] > # List of devices on the host that a
	I1211 00:11:34.983300   39129 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1211 00:11:34.983304   39129 command_runner.go:130] > # allowed_devices = [
	I1211 00:11:34.983428   39129 command_runner.go:130] > # 	"/dev/fuse",
	I1211 00:11:34.983527   39129 command_runner.go:130] > # 	"/dev/net/tun",
	I1211 00:11:34.983650   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983660   39129 command_runner.go:130] > # List of additional devices. specified as
	I1211 00:11:34.983668   39129 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1211 00:11:34.983680   39129 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1211 00:11:34.983687   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983813   39129 command_runner.go:130] > # additional_devices = [
	I1211 00:11:34.983820   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983826   39129 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1211 00:11:34.983923   39129 command_runner.go:130] > # cdi_spec_dirs = [
	I1211 00:11:34.984053   39129 command_runner.go:130] > # 	"/etc/cdi",
	I1211 00:11:34.984060   39129 command_runner.go:130] > # 	"/var/run/cdi",
	I1211 00:11:34.984160   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984177   39129 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1211 00:11:34.984184   39129 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1211 00:11:34.984195   39129 command_runner.go:130] > # Defaults to false.
	I1211 00:11:34.984334   39129 command_runner.go:130] > # device_ownership_from_security_context = false
	I1211 00:11:34.984345   39129 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1211 00:11:34.984355   39129 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1211 00:11:34.984488   39129 command_runner.go:130] > # hooks_dir = [
	I1211 00:11:34.984640   39129 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1211 00:11:34.984647   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984653   39129 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1211 00:11:34.984667   39129 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1211 00:11:34.984672   39129 command_runner.go:130] > # its default mounts from the following two files:
	I1211 00:11:34.984675   39129 command_runner.go:130] > #
	I1211 00:11:34.984681   39129 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1211 00:11:34.984694   39129 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1211 00:11:34.984700   39129 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1211 00:11:34.984703   39129 command_runner.go:130] > #
	I1211 00:11:34.984710   39129 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1211 00:11:34.984716   39129 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1211 00:11:34.984722   39129 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1211 00:11:34.984727   39129 command_runner.go:130] > #      only add mounts it finds in this file.
	I1211 00:11:34.984729   39129 command_runner.go:130] > #
	I1211 00:11:34.984883   39129 command_runner.go:130] > # default_mounts_file = ""
	I1211 00:11:34.984900   39129 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1211 00:11:34.984908   39129 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1211 00:11:34.985051   39129 command_runner.go:130] > # pids_limit = -1
	I1211 00:11:34.985062   39129 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1211 00:11:34.985075   39129 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1211 00:11:34.985083   39129 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1211 00:11:34.985091   39129 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1211 00:11:34.985222   39129 command_runner.go:130] > # log_size_max = -1
	I1211 00:11:34.985233   39129 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1211 00:11:34.985372   39129 command_runner.go:130] > # log_to_journald = false
	I1211 00:11:34.985382   39129 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1211 00:11:34.985404   39129 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1211 00:11:34.985411   39129 command_runner.go:130] > # Path to directory for container attach sockets.
	I1211 00:11:34.985416   39129 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1211 00:11:34.985422   39129 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1211 00:11:34.985425   39129 command_runner.go:130] > # bind_mount_prefix = ""
	I1211 00:11:34.985434   39129 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1211 00:11:34.985569   39129 command_runner.go:130] > # read_only = false
	I1211 00:11:34.985580   39129 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1211 00:11:34.985587   39129 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1211 00:11:34.985601   39129 command_runner.go:130] > # live configuration reload.
	I1211 00:11:34.985605   39129 command_runner.go:130] > # log_level = "info"
	I1211 00:11:34.985611   39129 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1211 00:11:34.985616   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.985619   39129 command_runner.go:130] > # log_filter = ""
	I1211 00:11:34.985626   39129 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985632   39129 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1211 00:11:34.985635   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985643   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985647   39129 command_runner.go:130] > # uid_mappings = ""
	I1211 00:11:34.985654   39129 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985660   39129 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1211 00:11:34.985664   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985672   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985681   39129 command_runner.go:130] > # gid_mappings = ""
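A minimal sketch of the (deprecated) containerID:HostID:Size syntax described above; the ID values are illustrative assumptions:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"

This would map container UID/GID 0 onto host ID 100000, for a range of 65536 IDs.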
	I1211 00:11:34.985688   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1211 00:11:34.985694   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985700   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985708   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985712   39129 command_runner.go:130] > # minimum_mappable_uid = -1
	I1211 00:11:34.985718   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1211 00:11:34.985723   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985729   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985737   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985741   39129 command_runner.go:130] > # minimum_mappable_gid = -1
	I1211 00:11:34.985747   39129 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1211 00:11:34.985753   39129 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1211 00:11:34.985759   39129 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1211 00:11:34.985975   39129 command_runner.go:130] > # ctr_stop_timeout = 30
	I1211 00:11:34.985988   39129 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1211 00:11:34.985994   39129 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1211 00:11:34.985999   39129 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1211 00:11:34.986004   39129 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1211 00:11:34.986008   39129 command_runner.go:130] > # drop_infra_ctr = true
	I1211 00:11:34.986014   39129 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1211 00:11:34.986019   39129 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1211 00:11:34.986029   39129 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1211 00:11:34.986033   39129 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1211 00:11:34.986040   39129 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1211 00:11:34.986046   39129 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1211 00:11:34.986051   39129 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1211 00:11:34.986057   39129 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1211 00:11:34.986060   39129 command_runner.go:130] > # shared_cpuset = ""
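A minimal sketch of both cpuset options using the Linux CPU list format mentioned above; the CPU numbers are illustrative assumptions:

	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3,6"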
	I1211 00:11:34.986066   39129 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1211 00:11:34.986071   39129 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1211 00:11:34.986075   39129 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1211 00:11:34.986082   39129 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1211 00:11:34.986085   39129 command_runner.go:130] > # pinns_path = ""
	I1211 00:11:34.986091   39129 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1211 00:11:34.986098   39129 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1211 00:11:34.986101   39129 command_runner.go:130] > # enable_criu_support = true
	I1211 00:11:34.986107   39129 command_runner.go:130] > # Enable/disable the generation of the container,
	I1211 00:11:34.986112   39129 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1211 00:11:34.986116   39129 command_runner.go:130] > # enable_pod_events = false
	I1211 00:11:34.986122   39129 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1211 00:11:34.986131   39129 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1211 00:11:34.986135   39129 command_runner.go:130] > # default_runtime = "crun"
	I1211 00:11:34.986140   39129 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1211 00:11:34.986148   39129 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1211 00:11:34.986159   39129 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1211 00:11:34.986164   39129 command_runner.go:130] > # creation as a file is not desired either.
	I1211 00:11:34.986172   39129 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1211 00:11:34.986177   39129 command_runner.go:130] > # the hostname is being managed dynamically.
	I1211 00:11:34.986181   39129 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1211 00:11:34.986185   39129 command_runner.go:130] > # ]
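A minimal sketch instantiating the /etc/hostname example given above:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]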
	I1211 00:11:34.986192   39129 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1211 00:11:34.986198   39129 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1211 00:11:34.986205   39129 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1211 00:11:34.986210   39129 command_runner.go:130] > # Each entry in the table should follow the format:
	I1211 00:11:34.986212   39129 command_runner.go:130] > #
	I1211 00:11:34.986217   39129 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1211 00:11:34.986221   39129 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1211 00:11:34.986226   39129 command_runner.go:130] > # runtime_type = "oci"
	I1211 00:11:34.986231   39129 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1211 00:11:34.986235   39129 command_runner.go:130] > # inherit_default_runtime = false
	I1211 00:11:34.986240   39129 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1211 00:11:34.986244   39129 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1211 00:11:34.986248   39129 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1211 00:11:34.986251   39129 command_runner.go:130] > # monitor_env = []
	I1211 00:11:34.986256   39129 command_runner.go:130] > # privileged_without_host_devices = false
	I1211 00:11:34.986259   39129 command_runner.go:130] > # allowed_annotations = []
	I1211 00:11:34.986265   39129 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1211 00:11:34.986268   39129 command_runner.go:130] > # no_sync_log = false
	I1211 00:11:34.986272   39129 command_runner.go:130] > # default_annotations = {}
	I1211 00:11:34.986276   39129 command_runner.go:130] > # stream_websockets = false
	I1211 00:11:34.986279   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.986309   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.986315   39129 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1211 00:11:34.986324   39129 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1211 00:11:34.986330   39129 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1211 00:11:34.986337   39129 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1211 00:11:34.986340   39129 command_runner.go:130] > #   in $PATH.
	I1211 00:11:34.986346   39129 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1211 00:11:34.986350   39129 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1211 00:11:34.986356   39129 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1211 00:11:34.986359   39129 command_runner.go:130] > #   state.
	I1211 00:11:34.986366   39129 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1211 00:11:34.986375   39129 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1211 00:11:34.986381   39129 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1211 00:11:34.986387   39129 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1211 00:11:34.986392   39129 command_runner.go:130] > #   the values from the default runtime on load time.
	I1211 00:11:34.986398   39129 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1211 00:11:34.986404   39129 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1211 00:11:34.986410   39129 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1211 00:11:34.986417   39129 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1211 00:11:34.986421   39129 command_runner.go:130] > #   The currently recognized values are:
	I1211 00:11:34.986428   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1211 00:11:34.986435   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1211 00:11:34.986440   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1211 00:11:34.986446   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1211 00:11:34.986455   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1211 00:11:34.986462   39129 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1211 00:11:34.986469   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1211 00:11:34.986475   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1211 00:11:34.986481   39129 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1211 00:11:34.986487   39129 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1211 00:11:34.986494   39129 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1211 00:11:34.986500   39129 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1211 00:11:34.986505   39129 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1211 00:11:34.986511   39129 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1211 00:11:34.986517   39129 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1211 00:11:34.986528   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1211 00:11:34.986534   39129 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1211 00:11:34.986538   39129 command_runner.go:130] > #   deprecated option "conmon".
	I1211 00:11:34.986545   39129 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1211 00:11:34.986550   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1211 00:11:34.986556   39129 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1211 00:11:34.986561   39129 command_runner.go:130] > #   should be moved to the container's cgroup
	I1211 00:11:34.986567   39129 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1211 00:11:34.986572   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1211 00:11:34.986579   39129 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1211 00:11:34.986583   39129 command_runner.go:130] > #   conmon-rs by using:
	I1211 00:11:34.986591   39129 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1211 00:11:34.986598   39129 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1211 00:11:34.986606   39129 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1211 00:11:34.986613   39129 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1211 00:11:34.986618   39129 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1211 00:11:34.986625   39129 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1211 00:11:34.986633   39129 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1211 00:11:34.986641   39129 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1211 00:11:34.986651   39129 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1211 00:11:34.986658   39129 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1211 00:11:34.986662   39129 command_runner.go:130] > #   when a machine crash happens.
	I1211 00:11:34.986669   39129 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1211 00:11:34.986677   39129 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1211 00:11:34.986685   39129 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1211 00:11:34.986689   39129 command_runner.go:130] > #   seccomp profile for the runtime.
	I1211 00:11:34.986695   39129 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1211 00:11:34.986702   39129 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1211 00:11:34.986704   39129 command_runner.go:130] > #
	I1211 00:11:34.986708   39129 command_runner.go:130] > # Using the seccomp notifier feature:
	I1211 00:11:34.986711   39129 command_runner.go:130] > #
	I1211 00:11:34.986717   39129 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1211 00:11:34.986724   39129 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1211 00:11:34.986729   39129 command_runner.go:130] > #
	I1211 00:11:34.986739   39129 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1211 00:11:34.986745   39129 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1211 00:11:34.986748   39129 command_runner.go:130] > #
	I1211 00:11:34.986754   39129 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1211 00:11:34.986757   39129 command_runner.go:130] > # feature.
	I1211 00:11:34.986760   39129 command_runner.go:130] > #
	I1211 00:11:34.986766   39129 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1211 00:11:34.986772   39129 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1211 00:11:34.986778   39129 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1211 00:11:34.986784   39129 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1211 00:11:34.986790   39129 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1211 00:11:34.986792   39129 command_runner.go:130] > #
	I1211 00:11:34.986799   39129 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1211 00:11:34.986805   39129 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1211 00:11:34.986808   39129 command_runner.go:130] > #
	I1211 00:11:34.986814   39129 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1211 00:11:34.986820   39129 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1211 00:11:34.986822   39129 command_runner.go:130] > #
	I1211 00:11:34.986828   39129 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1211 00:11:34.986833   39129 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1211 00:11:34.986837   39129 command_runner.go:130] > # limitation.
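A minimal sketch of a handler entry following the format documented above; the handler name and binary paths are hypothetical, and the allowed_annotations entry is an assumption shown to tie in the seccomp notifier feature:

	[crio.runtime.runtimes.my-handler]
	runtime_path = "/usr/local/bin/my-runtime"   # hypothetical runtime binary
	runtime_type = "oci"
	runtime_root = "/run/my-runtime"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",   # permits the notifier annotation described above
	]

A pod scheduled to this handler could then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop (with restartPolicy set to Never, as noted above) to have CRI-O terminate it on blocked syscalls.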
	I1211 00:11:34.986842   39129 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1211 00:11:34.986846   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1211 00:11:34.986850   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986853   39129 command_runner.go:130] > runtime_root = "/run/crun"
	I1211 00:11:34.986857   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986860   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986864   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.986868   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.986872   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.986876   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.986880   39129 command_runner.go:130] > allowed_annotations = [
	I1211 00:11:34.986887   39129 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1211 00:11:34.986889   39129 command_runner.go:130] > ]
	I1211 00:11:34.986894   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.986898   39129 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1211 00:11:34.986902   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1211 00:11:34.986906   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986909   39129 command_runner.go:130] > runtime_root = "/run/runc"
	I1211 00:11:34.986913   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986917   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986921   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.987106   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.987121   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.987127   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.987132   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.987139   39129 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1211 00:11:34.987147   39129 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1211 00:11:34.987154   39129 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1211 00:11:34.987166   39129 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1211 00:11:34.987177   39129 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1211 00:11:34.987187   39129 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1211 00:11:34.987194   39129 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1211 00:11:34.987200   39129 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1211 00:11:34.987209   39129 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1211 00:11:34.987218   39129 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1211 00:11:34.987224   39129 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1211 00:11:34.987231   39129 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1211 00:11:34.987235   39129 command_runner.go:130] > # Example:
	I1211 00:11:34.987241   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1211 00:11:34.987246   39129 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1211 00:11:34.987251   39129 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1211 00:11:34.987255   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1211 00:11:34.987258   39129 command_runner.go:130] > # cpuset = "0-1"
	I1211 00:11:34.987262   39129 command_runner.go:130] > # cpushares = "5"
	I1211 00:11:34.987269   39129 command_runner.go:130] > # cpuquota = "1000"
	I1211 00:11:34.987273   39129 command_runner.go:130] > # cpuperiod = "100000"
	I1211 00:11:34.987277   39129 command_runner.go:130] > # cpulimit = "35"
	I1211 00:11:34.987280   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.987284   39129 command_runner.go:130] > # The workload name is workload-type.
	I1211 00:11:34.987292   39129 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1211 00:11:34.987298   39129 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1211 00:11:34.987303   39129 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1211 00:11:34.987311   39129 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1211 00:11:34.987317   39129 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1211 00:11:34.987322   39129 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1211 00:11:34.987328   39129 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1211 00:11:34.987332   39129 command_runner.go:130] > # Default value is set to true
	I1211 00:11:34.987336   39129 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1211 00:11:34.987342   39129 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1211 00:11:34.987346   39129 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1211 00:11:34.987350   39129 command_runner.go:130] > # Default value is set to 'false'
	I1211 00:11:34.987355   39129 command_runner.go:130] > # disable_hostport_mapping = false
	I1211 00:11:34.987361   39129 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1211 00:11:34.987369   39129 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1211 00:11:34.987372   39129 command_runner.go:130] > # timezone = ""
	I1211 00:11:34.987379   39129 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1211 00:11:34.987382   39129 command_runner.go:130] > #
	I1211 00:11:34.987387   39129 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1211 00:11:34.987393   39129 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1211 00:11:34.987396   39129 command_runner.go:130] > [crio.image]
	I1211 00:11:34.987402   39129 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1211 00:11:34.987407   39129 command_runner.go:130] > # default_transport = "docker://"
	I1211 00:11:34.987413   39129 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1211 00:11:34.987419   39129 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987423   39129 command_runner.go:130] > # global_auth_file = ""
	I1211 00:11:34.987428   39129 command_runner.go:130] > # The image used to instantiate infra containers.
	I1211 00:11:34.987432   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987442   39129 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.987448   39129 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1211 00:11:34.987454   39129 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987458   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987463   39129 command_runner.go:130] > # pause_image_auth_file = ""
	I1211 00:11:34.987468   39129 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1211 00:11:34.987478   39129 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1211 00:11:34.987484   39129 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1211 00:11:34.987489   39129 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1211 00:11:34.987505   39129 command_runner.go:130] > # pause_command = "/pause"
	I1211 00:11:34.987511   39129 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1211 00:11:34.987518   39129 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1211 00:11:34.987524   39129 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1211 00:11:34.987530   39129 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1211 00:11:34.987536   39129 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1211 00:11:34.987542   39129 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1211 00:11:34.987545   39129 command_runner.go:130] > # pinned_images = [
	I1211 00:11:34.987549   39129 command_runner.go:130] > # ]
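A minimal sketch showing all three pattern styles described above; the image names are illustrative assumptions:

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"registry.k8s.io/kube-*",         # glob: wildcard at the end only
		"*coredns*",                      # keyword: wildcards on both ends
	]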
	I1211 00:11:34.987555   39129 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1211 00:11:34.987561   39129 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1211 00:11:34.987567   39129 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1211 00:11:34.987574   39129 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1211 00:11:34.987579   39129 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1211 00:11:34.987584   39129 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1211 00:11:34.987589   39129 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1211 00:11:34.987596   39129 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1211 00:11:34.987602   39129 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1211 00:11:34.987608   39129 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1211 00:11:34.987614   39129 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1211 00:11:34.987618   39129 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1211 00:11:34.987624   39129 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1211 00:11:34.987631   39129 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1211 00:11:34.987634   39129 command_runner.go:130] > # changing them here.
	I1211 00:11:34.987643   39129 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1211 00:11:34.987646   39129 command_runner.go:130] > # insecure_registries = [
	I1211 00:11:34.987651   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987657   39129 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1211 00:11:34.987662   39129 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1211 00:11:34.987666   39129 command_runner.go:130] > # image_volumes = "mkdir"
	I1211 00:11:34.987671   39129 command_runner.go:130] > # Temporary directory to use for storing big files
	I1211 00:11:34.987675   39129 command_runner.go:130] > # big_files_temporary_dir = ""
	I1211 00:11:34.987681   39129 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1211 00:11:34.987688   39129 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1211 00:11:34.987692   39129 command_runner.go:130] > # auto_reload_registries = false
	I1211 00:11:34.987698   39129 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1211 00:11:34.987706   39129 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1211 00:11:34.987711   39129 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1211 00:11:34.987715   39129 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1211 00:11:34.987719   39129 command_runner.go:130] > # The mode of short name resolution.
	I1211 00:11:34.987726   39129 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1211 00:11:34.987734   39129 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1211 00:11:34.987739   39129 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1211 00:11:34.987743   39129 command_runner.go:130] > # short_name_mode = "enforcing"
	I1211 00:11:34.987749   39129 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1211 00:11:34.987754   39129 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1211 00:11:34.987763   39129 command_runner.go:130] > # oci_artifact_mount_support = true
	I1211 00:11:34.987770   39129 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1211 00:11:34.987773   39129 command_runner.go:130] > # CNI plugins.
	I1211 00:11:34.987776   39129 command_runner.go:130] > [crio.network]
	I1211 00:11:34.987782   39129 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1211 00:11:34.987787   39129 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1211 00:11:34.987791   39129 command_runner.go:130] > # cni_default_network = ""
	I1211 00:11:34.987797   39129 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1211 00:11:34.987801   39129 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1211 00:11:34.987806   39129 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1211 00:11:34.987809   39129 command_runner.go:130] > # plugin_dirs = [
	I1211 00:11:34.987816   39129 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1211 00:11:34.987819   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987823   39129 command_runner.go:130] > # List of included pod metrics.
	I1211 00:11:34.987827   39129 command_runner.go:130] > # included_pod_metrics = [
	I1211 00:11:34.987830   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987837   39129 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1211 00:11:34.987840   39129 command_runner.go:130] > [crio.metrics]
	I1211 00:11:34.987845   39129 command_runner.go:130] > # Globally enable or disable metrics support.
	I1211 00:11:34.987849   39129 command_runner.go:130] > # enable_metrics = false
	I1211 00:11:34.987853   39129 command_runner.go:130] > # Specify enabled metrics collectors.
	I1211 00:11:34.987859   39129 command_runner.go:130] > # By default, all metrics are enabled.
	I1211 00:11:34.987865   39129 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1211 00:11:34.987871   39129 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1211 00:11:34.987877   39129 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1211 00:11:34.987880   39129 command_runner.go:130] > # metrics_collectors = [
	I1211 00:11:34.987884   39129 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1211 00:11:34.987888   39129 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1211 00:11:34.987892   39129 command_runner.go:130] > # 	"containers_oom_total",
	I1211 00:11:34.987895   39129 command_runner.go:130] > # 	"processes_defunct",
	I1211 00:11:34.987900   39129 command_runner.go:130] > # 	"operations_total",
	I1211 00:11:34.987904   39129 command_runner.go:130] > # 	"operations_latency_seconds",
	I1211 00:11:34.987908   39129 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1211 00:11:34.987912   39129 command_runner.go:130] > # 	"operations_errors_total",
	I1211 00:11:34.987916   39129 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1211 00:11:34.987920   39129 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1211 00:11:34.987924   39129 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1211 00:11:34.987928   39129 command_runner.go:130] > # 	"image_pulls_success_total",
	I1211 00:11:34.987932   39129 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1211 00:11:34.987936   39129 command_runner.go:130] > # 	"containers_oom_count_total",
	I1211 00:11:34.987942   39129 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1211 00:11:34.987946   39129 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1211 00:11:34.987950   39129 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1211 00:11:34.987953   39129 command_runner.go:130] > # ]
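A minimal sketch enabling the metrics server with an explicit collector subset; the collector choice is an illustrative assumption (both names appear in the list above):

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]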
	I1211 00:11:34.987962   39129 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1211 00:11:34.987967   39129 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1211 00:11:34.987972   39129 command_runner.go:130] > # The port on which the metrics server will listen.
	I1211 00:11:34.987975   39129 command_runner.go:130] > # metrics_port = 9090
	I1211 00:11:34.987980   39129 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1211 00:11:34.987984   39129 command_runner.go:130] > # metrics_socket = ""
	I1211 00:11:34.987989   39129 command_runner.go:130] > # The certificate for the secure metrics server.
	I1211 00:11:34.987994   39129 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1211 00:11:34.988001   39129 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1211 00:11:34.988005   39129 command_runner.go:130] > # certificate on any modification event.
	I1211 00:11:34.988008   39129 command_runner.go:130] > # metrics_cert = ""
	I1211 00:11:34.988013   39129 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1211 00:11:34.988018   39129 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1211 00:11:34.988021   39129 command_runner.go:130] > # metrics_key = ""
	I1211 00:11:34.988026   39129 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1211 00:11:34.988030   39129 command_runner.go:130] > [crio.tracing]
	I1211 00:11:34.988035   39129 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1211 00:11:34.988038   39129 command_runner.go:130] > # enable_tracing = false
	I1211 00:11:34.988044   39129 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1211 00:11:34.988050   39129 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1211 00:11:34.988056   39129 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1211 00:11:34.988061   39129 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1211 00:11:34.988064   39129 command_runner.go:130] > # CRI-O NRI configuration.
	I1211 00:11:34.988067   39129 command_runner.go:130] > [crio.nri]
	I1211 00:11:34.988071   39129 command_runner.go:130] > # Globally enable or disable NRI.
	I1211 00:11:34.988075   39129 command_runner.go:130] > # enable_nri = true
	I1211 00:11:34.988079   39129 command_runner.go:130] > # NRI socket to listen on.
	I1211 00:11:34.988083   39129 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1211 00:11:34.988087   39129 command_runner.go:130] > # NRI plugin directory to use.
	I1211 00:11:34.988091   39129 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1211 00:11:34.988095   39129 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1211 00:11:34.988100   39129 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1211 00:11:34.988108   39129 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1211 00:11:34.988171   39129 command_runner.go:130] > # nri_disable_connections = false
	I1211 00:11:34.988177   39129 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1211 00:11:34.988182   39129 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1211 00:11:34.988186   39129 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1211 00:11:34.988190   39129 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1211 00:11:34.988194   39129 command_runner.go:130] > # NRI default validator configuration.
	I1211 00:11:34.988201   39129 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1211 00:11:34.988207   39129 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1211 00:11:34.988211   39129 command_runner.go:130] > # can be restricted/rejected:
	I1211 00:11:34.988215   39129 command_runner.go:130] > # - OCI hook injection
	I1211 00:11:34.988220   39129 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1211 00:11:34.988225   39129 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1211 00:11:34.988229   39129 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1211 00:11:34.988233   39129 command_runner.go:130] > # - adjustment of linux namespaces
	I1211 00:11:34.988240   39129 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1211 00:11:34.988246   39129 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1211 00:11:34.988251   39129 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1211 00:11:34.988254   39129 command_runner.go:130] > #
	I1211 00:11:34.988258   39129 command_runner.go:130] > # [crio.nri.default_validator]
	I1211 00:11:34.988262   39129 command_runner.go:130] > # nri_enable_default_validator = false
	I1211 00:11:34.988267   39129 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1211 00:11:34.988272   39129 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1211 00:11:34.988277   39129 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1211 00:11:34.988282   39129 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1211 00:11:34.988287   39129 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1211 00:11:34.988291   39129 command_runner.go:130] > # nri_validator_required_plugins = [
	I1211 00:11:34.988294   39129 command_runner.go:130] > # ]
	I1211 00:11:34.988299   39129 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
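A minimal sketch of the default-validator settings listed above; the required plugin name is a hypothetical placeholder:

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true   # reject containers whose plugins inject OCI hooks
	nri_validator_required_plugins = [
		"example-resource-policy",   # hypothetical plugin that must process every container creation
	]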
	I1211 00:11:34.988306   39129 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1211 00:11:34.988309   39129 command_runner.go:130] > [crio.stats]
	I1211 00:11:34.988316   39129 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1211 00:11:34.988321   39129 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1211 00:11:34.988324   39129 command_runner.go:130] > # stats_collection_period = 0
	I1211 00:11:34.988334   39129 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1211 00:11:34.988341   39129 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1211 00:11:34.988345   39129 command_runner.go:130] > # collection_period = 0
	I1211 00:11:34.988741   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943588402Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1211 00:11:34.988759   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943910852Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1211 00:11:34.988775   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944105801Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1211 00:11:34.988788   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944281599Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1211 00:11:34.988804   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944534263Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.988813   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944919976Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1211 00:11:34.988827   39129 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1211 00:11:34.988906   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:34.988923   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:34.988942   39129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:11:34.988966   39129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:11:34.989098   39129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:11:34.989171   39129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:11:34.996103   39129 command_runner.go:130] > kubeadm
	I1211 00:11:34.996124   39129 command_runner.go:130] > kubectl
	I1211 00:11:34.996130   39129 command_runner.go:130] > kubelet
	I1211 00:11:34.996965   39129 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:11:34.997027   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:11:35.004524   39129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:11:35.022259   39129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:11:35.035877   39129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 00:11:35.049665   39129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:11:35.053270   39129 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1211 00:11:35.053410   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:35.173051   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:35.663593   39129 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:11:35.663611   39129 certs.go:195] generating shared ca certs ...
	I1211 00:11:35.663626   39129 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:35.663843   39129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:11:35.663918   39129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:11:35.664081   39129 certs.go:257] generating profile certs ...
	I1211 00:11:35.664282   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:11:35.664361   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:11:35.664489   39129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:11:35.664502   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 00:11:35.664555   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 00:11:35.664574   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 00:11:35.664591   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 00:11:35.664636   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 00:11:35.664653   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 00:11:35.664664   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 00:11:35.664675   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 00:11:35.664773   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:11:35.664811   39129 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:11:35.664825   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:11:35.664885   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:11:35.664944   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:11:35.664975   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:11:35.665087   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:35.665126   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 00:11:35.665138   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.665177   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.666144   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:11:35.692413   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:11:35.716263   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:11:35.735120   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:11:35.753386   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:11:35.771269   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:11:35.789331   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:11:35.806153   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:11:35.823663   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:11:35.840043   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:11:35.857281   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:11:35.874656   39129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:11:35.887595   39129 ssh_runner.go:195] Run: openssl version
	I1211 00:11:35.893373   39129 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1211 00:11:35.893766   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.901331   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:11:35.908770   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912293   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912332   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912381   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.953295   39129 command_runner.go:130] > 3ec20f2e
	I1211 00:11:35.953382   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:11:35.960497   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.967487   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:11:35.974778   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978822   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978856   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978928   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:36.019575   39129 command_runner.go:130] > b5213941
	I1211 00:11:36.020060   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:11:36.028538   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.036748   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:11:36.045277   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049492   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049553   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049672   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.092814   39129 command_runner.go:130] > 51391683
	I1211 00:11:36.093356   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
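
The openssl/ln sequence above installs each CA into the system trust store: hash the certificate subject, then symlink the file as <hash>.0 under /etc/ssl/certs so OpenSSL-based lookups can find it. A sketch of the same dance, assuming openssl is on PATH and the process may write to /etc/ssl/certs:

// Compute the OpenSSL subject hash for a CA file and link it as <hash>.0,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
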
	I1211 00:11:36.101223   39129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105165   39129 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105191   39129 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1211 00:11:36.105198   39129 command_runner.go:130] > Device: 259,1	Inode: 1312480     Links: 1
	I1211 00:11:36.105205   39129 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:36.105212   39129 command_runner.go:130] > Access: 2025-12-11 00:07:28.485872476 +0000
	I1211 00:11:36.105217   39129 command_runner.go:130] > Modify: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105222   39129 command_runner.go:130] > Change: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105228   39129 command_runner.go:130] >  Birth: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105288   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:11:36.146158   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.146663   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:11:36.187479   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.187576   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:11:36.228130   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.228568   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:11:36.269072   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.269532   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:11:36.310317   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.310832   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1211 00:11:36.353606   39129 command_runner.go:130] > Certificate will not expire
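
Each `openssl x509 -checkend 86400` call above exits 0 (printing "Certificate will not expire") when the certificate is still valid 24 hours out. The same check in pure Go, as a hedged equivalent rather than what minikube actually runs:

// willExpireWithin reports whether the PEM cert at path expires inside d,
// mirroring `openssl x509 -checkend <seconds>` (exit 0 == will not expire).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func willExpireWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if !expiring {
		fmt.Println("Certificate will not expire")
	}
}
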
	I1211 00:11:36.354067   39129 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:36.354163   39129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:11:36.354246   39129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:11:36.382480   39129 cri.go:89] found id: ""
	I1211 00:11:36.382557   39129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:11:36.389756   39129 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1211 00:11:36.389777   39129 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1211 00:11:36.389784   39129 command_runner.go:130] > /var/lib/minikube/etcd:
	I1211 00:11:36.390708   39129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:11:36.390737   39129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:11:36.390806   39129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:11:36.398342   39129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:11:36.398732   39129 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.398833   39129 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-786978" cluster setting kubeconfig missing "functional-786978" context setting]
	I1211 00:11:36.399137   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.399560   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.399714   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.400253   39129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 00:11:36.400273   39129 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 00:11:36.400281   39129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 00:11:36.400286   39129 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 00:11:36.400291   39129 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 00:11:36.400594   39129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:11:36.400697   39129 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1211 00:11:36.409983   39129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1211 00:11:36.410015   39129 kubeadm.go:602] duration metric: took 19.271635ms to restartPrimaryControlPlane
	I1211 00:11:36.410025   39129 kubeadm.go:403] duration metric: took 55.966406ms to StartCluster
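
The kubeconfig repair above fires because the file has neither a cluster nor a context entry for the profile. A sketch of that check using client-go's clientcmd loader (minikube uses its own kubeconfig package internally; this is only an illustration of the same test):

// Load a kubeconfig file and report whether the named profile is missing
// its cluster or context entry, i.e. "needs updating (will repair)".
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func needsRepair(path, profile string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, haveCluster := cfg.Clusters[profile]
	_, haveContext := cfg.Contexts[profile]
	return !haveCluster || !haveContext, nil
}

func main() {
	repair, err := needsRepair("/home/jenkins/minikube-integration/22061-2739/kubeconfig", "functional-786978")
	if err != nil {
		panic(err)
	}
	fmt.Println("needs updating:", repair)
}
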
	I1211 00:11:36.410041   39129 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410105   39129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.410754   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410951   39129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:11:36.411375   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:36.411428   39129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 00:11:36.411496   39129 addons.go:70] Setting storage-provisioner=true in profile "functional-786978"
	I1211 00:11:36.411509   39129 addons.go:239] Setting addon storage-provisioner=true in "functional-786978"
	I1211 00:11:36.411539   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.412103   39129 addons.go:70] Setting default-storageclass=true in profile "functional-786978"
	I1211 00:11:36.412128   39129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-786978"
	I1211 00:11:36.412372   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.412555   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.416027   39129 out.go:179] * Verifying Kubernetes components...
	I1211 00:11:36.418962   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:36.445616   39129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 00:11:36.448584   39129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.448615   39129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 00:11:36.448687   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.455632   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.455806   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.456398   39129 addons.go:239] Setting addon default-storageclass=true in "functional-786978"
	I1211 00:11:36.456432   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.459345   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.488078   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.511255   39129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:36.511282   39129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 00:11:36.511350   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.540894   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.608214   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:36.665748   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.679982   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
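
Each addon apply above is a plain exec of the pinned kubectl binary against the node-local kubeconfig. Sketched with os/exec (the sudo env-assignment form mirrors the logged command line; paths are copied from the log):

// Run the node's pinned kubectl with the node-local kubeconfig, as the
// addon installer does. sudo accepts leading VAR=value assignments.
package main

import (
	"os"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
}
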
	I1211 00:11:37.404051   39129 node_ready.go:35] waiting up to 6m0s for node "functional-786978" to be "Ready" ...
	I1211 00:11:37.404239   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.404634   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404742   39129 retry.go:31] will retry after 310.125043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404824   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404858   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404893   39129 retry.go:31] will retry after 141.721995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404991   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
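
The repeated GETs against /api/v1/nodes/functional-786978 are the node-Ready wait; an empty Response with milliseconds=0 means the dial failed outright. What that loop amounts to in client-go terms (a sketch, not minikube's node_ready implementation):

// Poll a node until its Ready condition is True or the timeout elapses,
// tolerating connection-refused errors while the apiserver restarts.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // retry; errors here are expected
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22061-2739/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-786978", 6*time.Minute); err != nil {
		panic(err)
	}
}
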
	I1211 00:11:37.547464   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.613487   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.613562   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.613592   39129 retry.go:31] will retry after 561.758211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.715754   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:37.779510   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.779557   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.779585   39129 retry.go:31] will retry after 505.869102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
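
The "will retry after 310.125043ms / 141.721995ms / 561.758211ms ..." lines that follow each failed apply come from a retry loop with randomized, growing delays. A generic sketch of that shape (not the retry.go seen in the log):

// Retry a function with jittered, doubling backoff until it succeeds
// or the attempt budget runs out.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
	backoff := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// jitter the delay so parallel appliers don't retry in lockstep
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		backoff *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
	})
}
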
	I1211 00:11:37.904810   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.904884   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.175539   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.243137   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.243185   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.243204   39129 retry.go:31] will retry after 361.539254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.286533   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:38.344606   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.348111   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.348157   39129 retry.go:31] will retry after 829.218438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.404431   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.404511   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.404881   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.605429   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.661283   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.664833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.664864   39129 retry.go:31] will retry after 800.266997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.905185   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.905301   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.905646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:39.178251   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:39.238429   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.238472   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.238493   39129 retry.go:31] will retry after 1.184749907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.405001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.405348   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:39.405424   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:39.465581   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:39.526474   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.526525   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.526544   39129 retry.go:31] will retry after 1.807004704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.905105   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.905423   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.405603   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.423936   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:40.495739   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:40.495794   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.495811   39129 retry.go:31] will retry after 1.404783651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.334388   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:41.396786   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.396852   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.396891   39129 retry.go:31] will retry after 1.10995967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.405068   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.405184   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.405534   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:41.405602   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:41.901437   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:41.905007   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.905077   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.905313   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.984043   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.984104   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.984123   39129 retry.go:31] will retry after 1.551735429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.404784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:42.507069   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:42.562010   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:42.565655   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.565695   39129 retry.go:31] will retry after 1.834850552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.904273   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.904413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.904767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.404422   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.536095   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:43.596578   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:43.596618   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.596641   39129 retry.go:31] will retry after 3.759083682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.905026   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.905109   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.905424   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:43.905474   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:44.401015   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:44.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.404608   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:44.466004   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:44.470131   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.470162   39129 retry.go:31] will retry after 3.734519465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node readiness poll repeats every ~500ms through 00:11:46.9: GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 → connection refused; representative warning below ...]
	W1211 00:11:45.905817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
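(The ~500ms GET loop summarized above is minikube's node_ready poll: fetch the node object and check its "Ready" condition, retrying while the connection is refused. A minimal client-go sketch of that check follows; the kubeconfig path, node name, and cadence are taken from the log, everything else is illustrative.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumption: same kubeconfig the log's kubectl invocations use
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-786978", metav1.GetOptions{})
		if err != nil {
			// matches the node_ready.go warnings: log and keep retrying
			fmt.Println("error getting node (will retry):", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}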
	I1211 00:11:47.356864   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[... readiness poll continues (connection refused) ...]
	I1211 00:11:47.420245   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:47.420295   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:11:47.420315   39129 retry.go:31] will retry after 2.851566945s
	[... readiness poll continues (connection refused) ...]
	I1211 00:11:48.205865   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:48.269575   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:48.269614   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:11:48.269633   39129 retry.go:31] will retry after 3.250947796s
	[... readiness poll continues every ~500ms through 00:11:49.9 (connection refused) ...]
	I1211 00:11:50.272194   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:50.327238   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:50.331229   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:11:50.331261   39129 retry.go:31] will retry after 4.377849152s
	[... readiness poll continues every ~500ms through 00:11:51.4 (connection refused) ...]
	I1211 00:11:51.521211   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:51.575865   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:51.579753   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:11:51.579788   39129 retry.go:31] will retry after 10.380601314s
	[... readiness poll continues every ~500ms through 00:11:54.4 (connection refused) ...]
	I1211 00:11:54.709241   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:54.767641   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:54.771055   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:11:54.771086   39129 retry.go:31] will retry after 5.957769887s
	[... readiness poll continues every ~500ms through 00:12:00.4 (connection refused) ...]
	I1211 00:12:00.729113   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:00.791242   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:00.794799   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:12:00.794830   39129 retry.go:31] will retry after 11.484844112s
	[... readiness poll continues every ~500ms through 00:12:01.9 (connection refused) ...]
	I1211 00:12:01.961328   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:02.020749   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:02.024939   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:12:02.024971   39129 retry.go:31] will retry after 14.651232328s
	[... readiness poll continues every ~500ms through 00:12:11.9 (connection refused) ...]
	I1211 00:12:12.280537   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:12.342793   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:12.342833   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:12:12.342853   39129 retry.go:31] will retry after 23.205348466s
	[... readiness poll continues every ~500ms through 00:12:16.4 (connection refused) ...]
	I1211 00:12:16.676815   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:16.732715   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:16.736183   39129 addons.go:477] apply failed, will retry: kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr repeats the validation error above)
	I1211 00:12:16.736213   39129 retry.go:31] will retry after 30.816141509s
	[... readiness poll continues every ~500ms through 00:12:29.4 (connection refused) ...]
	I1211 00:12:29.904646   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.904725   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.905092   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:30.404773   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.404846   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.405165   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:30.405221   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:30.904956   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.905034   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.905377   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.405001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.405072   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.405325   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.905650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.904301   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.904387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.904648   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:32.904697   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:33.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.404825   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:33.904520   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.904591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.404711   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.904339   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.904412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:34.904798   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:35.404390   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:35.549321   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:35.607106   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:35.610743   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:35.610780   39129 retry.go:31] will retry after 16.241459848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
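
The two records above show the pattern minikube's addon manager follows while the apiserver is down: run kubectl apply against the manifest, and on failure log the command output and schedule another attempt after a growing, jittered delay (16.24s here; 35.21s for the storageclass manifest below). A minimal Go sketch of that apply-and-backoff loop (the applyWithRetry helper is hypothetical, not minikube's actual retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the behaviour in the log above: run `kubectl apply`,
// and on failure wait a jittered, doubling interval before trying again.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 10 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("apply failed, will retry after %v: %v\n%s", wait, err, out)
		time.Sleep(wait)
		backoff *= 2
	}
	return fmt.Errorf("applying %s failed after %d attempts", manifest, attempts)
}

func main() {
	// Manifest path taken from the log; returns quietly once an apply succeeds.
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println(err)
	}
}
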
	[... node "Ready" polling continues every ~500ms, each attempt refused, through 00:12:47.404 ...]
	I1211 00:12:47.553376   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:47.607763   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:47.611288   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.611317   39129 retry.go:31] will retry after 35.21019071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node "Ready" polling continues, still refused, through 00:12:51.404 ...]
	I1211 00:12:51.852477   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:51.907207   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910785   39129 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
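
The GET polling that fills this log is minikube repeatedly checking the node's "Ready" condition while the apiserver on 192.168.49.2:8441 refuses connections. A minimal client-go sketch of one such check; this is an illustrative stand-in, not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 10; i++ {
		ok, err := nodeReady(cs, "functional-786978")
		if err != nil {
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
			continue
		}
		fmt.Println("node Ready:", ok)
		return
	}
}
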
	[... node "Ready" polling continues every ~500ms, every attempt refused, through 00:13:19.904 ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.904900   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:20.404601   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.405009   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:20.405059   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:20.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.904383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.904630   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.404435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.904577   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.904658   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.905033   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.404711   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.404786   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.405042   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.821681   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:13:22.876683   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880396   39129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1211 00:13:22.883693   39129 out.go:179] * Enabled addons: 
	I1211 00:13:22.887530   39129 addons.go:530] duration metric: took 1m46.476102717s for enable addons: enabled=[]
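The storageclass failure above is collateral from the same outage: kubectl apply validates the manifest by downloading /openapi/v2 from the apiserver, so with nothing listening on port 8441 the apply fails before it can do anything (the suggested --validate=false would only skip schema validation, not make the server reachable). minikube logs the failure at addons.go:477 and retries the callback until its deadline, after which the addon run finishes with enabled=[]. A rough sketch of such an apply-with-retry loop, reusing the exact command line from the log but with an illustrative retry count and backoff (not minikube's actual policy):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest shells out to the cluster's kubectl the same way the
// ssh_runner line above does, retrying with a linear backoff. Command
// and paths are copied from the log; the retry policy is illustrative.
func applyManifest(path string) error {
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", path).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %w\n%s", attempt, err, out)
		fmt.Println("apply failed, will retry:", lastErr)
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	return lastErr
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("giving up:", err)
	}
}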
	I1211 00:13:22.904608   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.904678   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.904957   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:22.905000   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:23.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:13:23.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:23.404775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:23.904320   39129 type.go:168] "Request Body" body=""
	I1211 00:13:23.904392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:23.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:24.404395   39129 type.go:168] "Request Body" body=""
	I1211 00:13:24.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:24.404803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:24.904476   39129 type.go:168] "Request Body" body=""
	I1211 00:13:24.904551   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:24.904854   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:25.404225   39129 type.go:168] "Request Body" body=""
	I1211 00:13:25.404302   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:25.404557   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:25.404605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:25.905344   39129 type.go:168] "Request Body" body=""
	I1211 00:13:25.905433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:25.905756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:26.404641   39129 type.go:168] "Request Body" body=""
	I1211 00:13:26.404719   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:26.405097   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:26.904886   39129 type.go:168] "Request Body" body=""
	I1211 00:13:26.904949   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:26.905202   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:27.404947   39129 type.go:168] "Request Body" body=""
	I1211 00:13:27.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:27.405328   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:27.405384   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:27.905093   39129 type.go:168] "Request Body" body=""
	I1211 00:13:27.905164   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:27.905485   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:28.405246   39129 type.go:168] "Request Body" body=""
	I1211 00:13:28.405317   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:28.405598   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:28.904844   39129 type.go:168] "Request Body" body=""
	I1211 00:13:28.904917   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:28.905225   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:29.405028   39129 type.go:168] "Request Body" body=""
	I1211 00:13:29.405117   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:29.405404   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:29.405449   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:29.905168   39129 type.go:168] "Request Body" body=""
	I1211 00:13:29.905247   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:29.905504   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:30.405258   39129 type.go:168] "Request Body" body=""
	I1211 00:13:30.405331   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:30.405639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:30.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:13:30.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:30.904795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:31.404468   39129 type.go:168] "Request Body" body=""
	I1211 00:13:31.404537   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:31.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:31.904793   39129 type.go:168] "Request Body" body=""
	I1211 00:13:31.904867   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:31.905218   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:31.905275   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:32.405039   39129 type.go:168] "Request Body" body=""
	I1211 00:13:32.405110   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:32.405458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:32.905101   39129 type.go:168] "Request Body" body=""
	I1211 00:13:32.905197   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:32.905510   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:33.405238   39129 type.go:168] "Request Body" body=""
	I1211 00:13:33.405316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:33.405652   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:33.905292   39129 type.go:168] "Request Body" body=""
	I1211 00:13:33.905361   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:33.905671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:33.905728   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:34.404310   39129 type.go:168] "Request Body" body=""
	I1211 00:13:34.404382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:34.404620   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:34.904316   39129 type.go:168] "Request Body" body=""
	I1211 00:13:34.904389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:34.904718   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:35.404440   39129 type.go:168] "Request Body" body=""
	I1211 00:13:35.404512   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:35.404844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:35.904617   39129 type.go:168] "Request Body" body=""
	I1211 00:13:35.904692   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:35.908415   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1211 00:13:35.908524   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:36.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:13:36.404527   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:36.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:36.904387   39129 type.go:168] "Request Body" body=""
	I1211 00:13:36.904459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:36.904786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:37.404692   39129 type.go:168] "Request Body" body=""
	I1211 00:13:37.404758   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:37.405006   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:37.904670   39129 type.go:168] "Request Body" body=""
	I1211 00:13:37.904745   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:37.905089   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:38.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:13:38.404992   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:38.405353   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:38.405405   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:38.905139   39129 type.go:168] "Request Body" body=""
	I1211 00:13:38.905213   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:38.905467   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:39.405228   39129 type.go:168] "Request Body" body=""
	I1211 00:13:39.405305   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:39.405652   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:39.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:13:39.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:39.904766   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:40.404302   39129 type.go:168] "Request Body" body=""
	I1211 00:13:40.404373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:40.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:40.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:13:40.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:40.904753   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:40.904809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:41.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:13:41.404438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:41.404779   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:41.904302   39129 type.go:168] "Request Body" body=""
	I1211 00:13:41.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:41.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:42.404322   39129 type.go:168] "Request Body" body=""
	I1211 00:13:42.404413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:42.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:42.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:13:42.904422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:42.904720   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:43.404315   39129 type.go:168] "Request Body" body=""
	I1211 00:13:43.404385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:43.404647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:43.404698   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:43.904384   39129 type.go:168] "Request Body" body=""
	I1211 00:13:43.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:43.904781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:44.404476   39129 type.go:168] "Request Body" body=""
	I1211 00:13:44.404557   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:44.404888   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:44.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:13:44.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:44.904695   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:45.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:13:45.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:45.404787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:45.404841   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:45.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:13:45.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:45.904815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:46.404456   39129 type.go:168] "Request Body" body=""
	I1211 00:13:46.404520   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:46.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:46.904418   39129 type.go:168] "Request Body" body=""
	I1211 00:13:46.904492   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:46.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:47.404396   39129 type.go:168] "Request Body" body=""
	I1211 00:13:47.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:47.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:47.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:13:47.904405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:47.904688   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:47.904737   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:48.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:13:48.404478   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:48.404831   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:48.904535   39129 type.go:168] "Request Body" body=""
	I1211 00:13:48.904627   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:48.904963   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:49.404424   39129 type.go:168] "Request Body" body=""
	I1211 00:13:49.404502   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:49.404762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:49.904361   39129 type.go:168] "Request Body" body=""
	I1211 00:13:49.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:49.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:49.904846   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:50.404484   39129 type.go:168] "Request Body" body=""
	I1211 00:13:50.404567   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:50.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:50.904312   39129 type.go:168] "Request Body" body=""
	I1211 00:13:50.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:50.904631   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:51.404397   39129 type.go:168] "Request Body" body=""
	I1211 00:13:51.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:51.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:51.904771   39129 type.go:168] "Request Body" body=""
	I1211 00:13:51.904845   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:51.905178   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:51.905230   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:52.404316   39129 type.go:168] "Request Body" body=""
	I1211 00:13:52.404412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:52.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:52.904443   39129 type.go:168] "Request Body" body=""
	I1211 00:13:52.904515   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:52.904867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:53.404558   39129 type.go:168] "Request Body" body=""
	I1211 00:13:53.404636   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:53.404950   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:53.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:13:53.904402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:53.904654   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:54.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:13:54.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:54.404826   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:54.404880   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:54.904552   39129 type.go:168] "Request Body" body=""
	I1211 00:13:54.904633   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:54.904969   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:55.404658   39129 type.go:168] "Request Body" body=""
	I1211 00:13:55.404733   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:55.405025   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:55.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:13:55.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:55.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:56.404581   39129 type.go:168] "Request Body" body=""
	I1211 00:13:56.404661   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:56.404984   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:56.405049   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:56.904867   39129 type.go:168] "Request Body" body=""
	I1211 00:13:56.904933   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:56.905188   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:57.404988   39129 type.go:168] "Request Body" body=""
	I1211 00:13:57.405064   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:57.405398   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:57.905216   39129 type.go:168] "Request Body" body=""
	I1211 00:13:57.905310   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:57.905575   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:58.404252   39129 type.go:168] "Request Body" body=""
	I1211 00:13:58.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:58.404664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:58.904398   39129 type.go:168] "Request Body" body=""
	I1211 00:13:58.904491   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:58.904791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:58.904844   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:59.404513   39129 type.go:168] "Request Body" body=""
	I1211 00:13:59.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:59.404873   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:59.904538   39129 type.go:168] "Request Body" body=""
	I1211 00:13:59.904626   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:59.904952   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:00.404414   39129 type.go:168] "Request Body" body=""
	I1211 00:14:00.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:00.404845   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:00.904385   39129 type.go:168] "Request Body" body=""
	I1211 00:14:00.904466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:00.904782   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:01.404336   39129 type.go:168] "Request Body" body=""
	I1211 00:14:01.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:01.404702   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:01.404753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:01.904380   39129 type.go:168] "Request Body" body=""
	I1211 00:14:01.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:01.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:02.404369   39129 type.go:168] "Request Body" body=""
	I1211 00:14:02.404443   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:02.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:02.904253   39129 type.go:168] "Request Body" body=""
	I1211 00:14:02.904328   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:02.904579   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:03.404289   39129 type.go:168] "Request Body" body=""
	I1211 00:14:03.404365   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:03.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:03.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:03.904382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:03.904744   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:03.904802   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:04.404326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:04.404677   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:04.904415   39129 type.go:168] "Request Body" body=""
	I1211 00:14:04.904489   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:04.904786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:05.404376   39129 type.go:168] "Request Body" body=""
	I1211 00:14:05.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:05.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:05.904294   39129 type.go:168] "Request Body" body=""
	I1211 00:14:05.904370   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:05.904638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:06.404572   39129 type.go:168] "Request Body" body=""
	I1211 00:14:06.404651   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:06.404978   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:06.405038   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats every ~500ms from 00:14:06.904 through 00:15:06.905: GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 with the same Accept and User-Agent headers, an empty response (status="" headers="" milliseconds=0), and roughly every two seconds node_ready.go:55 re-logs the same warning: error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1211 00:15:07.405134   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.405202   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.405455   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:07.405496   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:07.905236   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.905316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.905668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.404259   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.404335   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.404669   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.904348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.904675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.404767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.904456   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.904528   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.904872   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:09.904926   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:10.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.404420   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:10.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.904438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.404399   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.904304   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.904386   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.904651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:12.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.404820   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:12.404875   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:12.904549   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.904630   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.904969   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.405324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.405622   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.904354   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.404360   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.404751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:14.904865   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:15.404372   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.404803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:15.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.404554   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.904714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.905117   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:16.905186   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:17.404926   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.404997   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.405333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:17.905100   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.905177   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.905446   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.405222   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.405312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.405665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.904275   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.904355   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:19.404415   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.404483   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.404828   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:19.404886   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:19.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.904762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.404498   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.904600   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.904693   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.904972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:21.404647   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.405062   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:21.405116   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:21.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.905031   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.905358   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.405138   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.405205   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.405465   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.905211   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.905339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.905644   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.404356   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.404765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.904406   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:23.904809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:24.404449   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.404844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:24.904567   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.904647   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.904980   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.404518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.404591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.404896   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:25.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:26.404543   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.404952   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:26.904722   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.905041   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.404714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.404795   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.904869   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.904942   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.905254   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:27.905309   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:28.405022   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.405096   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.405402   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:28.905177   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.905254   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.905568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.404313   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.404393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.904395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.904647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:30.404357   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:30.404784   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:30.904438   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.904510   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.904846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.404410   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.404482   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.404742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.904715   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.905138   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:32.404902   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.404973   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.405298   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:32.405356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:32.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.905100   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.905353   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.405141   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.405225   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.405565   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.904778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.404473   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.404543   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.404861   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.904347   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.904417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:34.904828   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:35.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.404888   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:35.904565   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.904641   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.904947   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.404649   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.404729   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.405029   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.904816   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.904901   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.905206   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:36.905255   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:37.404887   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.404952   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.405287   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:37.904915   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.904985   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.905278   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.405156   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.405464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.905056   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.905124   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.905378   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:38.905418   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:39.405219   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.405647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:39.904336   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.904409   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.404291   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.404607   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.904308   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:41.404428   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.404503   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:41.404925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:41.904306   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.904378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.904685   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.404364   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:43.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.405484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:43.405524   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:43.905240   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.905312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.905656   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.404466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.904706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.404373   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.404448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.404764   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.904503   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.904579   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.904930   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:45.905003   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:46.404675   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.404755   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.405031   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:46.904971   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.905045   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.905387   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.405184   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.405266   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.405600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.904290   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.904358   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.904588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:48.404282   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.404778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:48.404837   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:48.904518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.904616   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.904965   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.404435   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.404505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.404851   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.904395   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.904468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.904810   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.904414   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.904489   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.904743   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:50.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:51.404343   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.404705   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:51.904585   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.904663   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.904998   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.404551   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.404875   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:53.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.404762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:53.404819   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:53.904321   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.904670   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.404457   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.404310   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.904444   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.904531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.904876   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:55.904925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:56.404575   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.404977   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:56.904836   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.904913   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.905188   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:57.404951   39129 type.go:168] "Request Body" body=""
	I1211 00:15:57.405027   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:57.405355   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:57.905511   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same poll repeats every ~500ms from 00:15:57.905 through 00:16:59.404: type.go:168 logs an empty "Request Body", round_trippers.go:527 issues the identical GET with the same headers, round_trippers.go:632 records an empty response (milliseconds=0), and roughly every fifth attempt node_ready.go:55 emits the same "connection refused" warning ...]
	I1211 00:16:59.904595   39129 type.go:168] "Request Body" body=""
	I1211 00:16:59.904668   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:59.904932   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:00.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:00.404598   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:00.404940   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:00.404990   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:00.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:17:00.905010   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:00.905344   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:01.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:17:01.405257   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:01.405546   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:01.904448   39129 type.go:168] "Request Body" body=""
	I1211 00:17:01.904523   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:01.904989   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:02.404570   39129 type.go:168] "Request Body" body=""
	I1211 00:17:02.404663   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:02.405065   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:02.405126   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:02.904936   39129 type.go:168] "Request Body" body=""
	I1211 00:17:02.905055   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:02.905428   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:03.405280   39129 type.go:168] "Request Body" body=""
	I1211 00:17:03.405375   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:03.405771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:03.904399   39129 type.go:168] "Request Body" body=""
	I1211 00:17:03.904496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:03.904861   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:04.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:17:04.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:04.404777   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:04.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:17:04.904432   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:04.904791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:04.904849   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:05.404438   39129 type.go:168] "Request Body" body=""
	I1211 00:17:05.404527   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:05.404897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:05.904937   39129 type.go:168] "Request Body" body=""
	I1211 00:17:05.905042   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:05.905515   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:06.404643   39129 type.go:168] "Request Body" body=""
	I1211 00:17:06.404731   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:06.405084   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:06.904983   39129 type.go:168] "Request Body" body=""
	I1211 00:17:06.905060   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:06.905437   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:06.905502   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:07.405222   39129 type.go:168] "Request Body" body=""
	I1211 00:17:07.405297   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:07.405568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:07.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:17:07.904409   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:07.904749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:08.404456   39129 type.go:168] "Request Body" body=""
	I1211 00:17:08.404553   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:08.404970   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:08.904274   39129 type.go:168] "Request Body" body=""
	I1211 00:17:08.904341   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:08.904585   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:09.404270   39129 type.go:168] "Request Body" body=""
	I1211 00:17:09.404363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:09.404662   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:09.404710   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:09.904270   39129 type.go:168] "Request Body" body=""
	I1211 00:17:09.904341   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:09.904653   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:10.405242   39129 type.go:168] "Request Body" body=""
	I1211 00:17:10.405315   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:10.405609   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:10.904309   39129 type.go:168] "Request Body" body=""
	I1211 00:17:10.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:10.904702   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:11.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:17:11.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:11.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:11.404842   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:11.904759   39129 type.go:168] "Request Body" body=""
	I1211 00:17:11.904838   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:11.905118   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:12.404893   39129 type.go:168] "Request Body" body=""
	I1211 00:17:12.404967   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:12.405296   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:12.905088   39129 type.go:168] "Request Body" body=""
	I1211 00:17:12.905195   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:12.905511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.405329   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.405579   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:13.405619   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:13.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.904761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.404335   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.404408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.404714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.904317   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.904641   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.404300   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.404377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.404706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.904411   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.904487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.904783   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:15.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:16.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.404582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:16.904777   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.904859   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.905166   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.404976   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.405047   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.405346   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.905085   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.905148   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.905484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:17.905576   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:18.405320   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.405400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.405752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:18.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.904452   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.404450   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.404524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.404839   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.904351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.904429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:20.404466   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.404542   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:20.404929   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:20.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.404759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.904459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.404295   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.404368   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.404715   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.904484   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:22.904863   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:23.404333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.404410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.404731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:23.904406   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.904749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.404371   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.904474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.904844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:24.904898   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:25.404303   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.404370   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:25.904598   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.904676   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.905012   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.404650   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.405090   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.904820   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.904890   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.905169   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:26.905212   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:27.404936   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.405356   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:27.905133   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.905529   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.405274   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.405341   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.405686   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.904432   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:29.404458   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.404541   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:29.404943   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:29.904284   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.904367   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.904684   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.404462   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.904507   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.904582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.904891   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.404374   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.904703   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.904772   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.908235   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1211 00:17:31.908301   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:32.405044   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.405123   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.405443   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:32.905095   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.905166   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.905421   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.405170   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.405251   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.405557   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.905635   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:34.404308   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.404675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:34.404722   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:34.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.904444   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.904434   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.904506   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.904785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:36.404582   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.404662   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.404987   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:36.405043   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:36.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.904799   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:37.404341   39129 type.go:168] "Request Body" body=""
	I1211 00:17:37.404399   39129 node_ready.go:38] duration metric: took 6m0.000266247s for node "functional-786978" to be "Ready" ...
	I1211 00:17:37.407624   39129 out.go:203] 
	W1211 00:17:37.410619   39129 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1211 00:17:37.410819   39129 out.go:285] * 
	W1211 00:17:37.413036   39129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:17:37.415867   39129 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-786978 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.484219894s for "functional-786978" cluster.
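The failure above is a readiness poll timing out: the client re-issues GET /api/v1/nodes/functional-786978 roughly every 500ms, swallows each "connection refused" error, and gives up when the 6-minute deadline expires ("WaitNodeCondition: context deadline exceeded"). A minimal client-go sketch of that pattern follows; this is not minikube's actual node_ready.go, the node name and timings are taken from the log, and everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition every 500ms until it is
	// true or the 6-minute deadline expires, retrying through transient errors
	// such as "connection refused" -- the same shape as the loop in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // swallow the error so the poll retries
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		err = waitNodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "functional-786978")
		if err != nil {
			fmt.Println("node never became Ready:", err) // here: context deadline exceeded
		}
	}

Because the condition function returns (false, nil) on error instead of propagating it, the poll only ever ends in success or in the context deadline seen above.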
I1211 00:17:38.000440    4875 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
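The inspect output shows the container healthy at the Docker level: running since 00:03:15, attached to the functional-786978 network at 192.168.49.2, with the apiserver port 8441/tcp published on 127.0.0.1:32786. A minimal sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client) is available, that reads the same port mapping programmatically:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		insp, err := cli.ContainerInspect(context.Background(), "functional-786978")
		if err != nil {
			panic(err)
		}
		// For this report's container the 8441/tcp entry is 127.0.0.1:32786.
		for _, b := range insp.NetworkSettings.Ports[nat.Port("8441/tcp")] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}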
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (393.190869ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 logs -n 25: (1.038428837s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh -- ls -la /mount-9p                                                                                                         │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh sudo umount -f /mount-9p                                                                                                    │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount1 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount3 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount1                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount2 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount2                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh findmnt -T /mount3                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 --kill=true                                                                                                                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format short --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh            │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image          │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete         │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start          │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start          │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:11:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
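
Per the [IWEF] marker in the format line above, each entry begins with its severity (Info, Warning, Error, Fatal) fused to the month and day, so the run below is almost all I1211 lines plus one W1211 entry from fix.go:138. A minimal severity filter, assuming the raw log were saved to a file named minikube.log (hypothetical name):

    # keep only warning/error/fatal entries
    grep -E '^[WEF][0-9]{4} ' minikube.log
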
	I1211 00:11:31.563230   39129 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:11:31.563658   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563678   39129 out.go:374] Setting ErrFile to fd 2...
	I1211 00:11:31.563685   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563986   39129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:11:31.564407   39129 out.go:368] Setting JSON to false
	I1211 00:11:31.565211   39129 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1378,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:11:31.565283   39129 start.go:143] virtualization:  
	I1211 00:11:31.568710   39129 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:11:31.572525   39129 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:11:31.572647   39129 notify.go:221] Checking for updates...
	I1211 00:11:31.578309   39129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:11:31.581264   39129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:31.584071   39129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:11:31.586801   39129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:11:31.589632   39129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:11:31.593067   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:31.593203   39129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:11:31.624525   39129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:11:31.624640   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.680227   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.670392474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.680335   39129 docker.go:319] overlay module found
	I1211 00:11:31.683507   39129 out.go:179] * Using the docker driver based on existing profile
	I1211 00:11:31.686334   39129 start.go:309] selected driver: docker
	I1211 00:11:31.686351   39129 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.686457   39129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:11:31.686564   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.744265   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.73545255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.744665   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:31.744728   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:31.744781   39129 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.747938   39129 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:11:31.750895   39129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:11:31.753857   39129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:11:31.756592   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:31.756636   39129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:11:31.756650   39129 cache.go:65] Caching tarball of preloaded images
	I1211 00:11:31.756687   39129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:11:31.756736   39129 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:11:31.756746   39129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:11:31.756847   39129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:11:31.775263   39129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:11:31.775283   39129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:11:31.775304   39129 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:11:31.775335   39129 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:11:31.775391   39129 start.go:364] duration metric: took 34.412µs to acquireMachinesLock for "functional-786978"
	I1211 00:11:31.775414   39129 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:11:31.775420   39129 fix.go:54] fixHost starting: 
	I1211 00:11:31.775679   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:31.791888   39129 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:11:31.791920   39129 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:11:31.795111   39129 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:11:31.795143   39129 machine.go:94] provisionDockerMachine start ...
	I1211 00:11:31.795229   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.811419   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.811754   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.811770   39129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:11:31.962366   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:31.962392   39129 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:11:31.962456   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.979928   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.980236   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.980251   39129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:11:32.139976   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:32.140054   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.158886   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.159253   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.159279   39129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:11:32.307553   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
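
The shell fragment above is an idempotent hostname fix-up: if no /etc/hosts entry ends in the new hostname, it either rewrites an existing 127.0.1.1 line in place or appends one. The empty output here is consistent with either silent branch (only the tee fallback echoes anything). A quick check of the intended end state (hostname taken from this run):

    grep functional-786978 /etc/hosts
    # expected to include a line like: 127.0.1.1 functional-786978
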
	I1211 00:11:32.307588   39129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:11:32.307609   39129 ubuntu.go:190] setting up certificates
	I1211 00:11:32.307618   39129 provision.go:84] configureAuth start
	I1211 00:11:32.307677   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:32.326881   39129 provision.go:143] copyHostCerts
	I1211 00:11:32.326928   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.326981   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:11:32.326990   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.327094   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:11:32.327189   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327219   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:11:32.327229   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327259   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:11:32.327306   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327328   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:11:32.327337   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327369   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:11:32.327438   39129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:11:32.651770   39129 provision.go:177] copyRemoteCerts
	I1211 00:11:32.651883   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:11:32.651966   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.672496   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:32.786699   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 00:11:32.786771   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:11:32.804288   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 00:11:32.804348   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:11:32.822111   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 00:11:32.822172   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 00:11:32.839310   39129 provision.go:87] duration metric: took 531.679958ms to configureAuth
	I1211 00:11:32.839337   39129 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:11:32.839540   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:32.839656   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.857209   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.857554   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.857577   39129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:11:33.187304   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:11:33.187369   39129 machine.go:97] duration metric: took 1.392217167s to provisionDockerMachine
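
The tee in the SSH command above writes a one-line environment file, echoed back in the output, that hands CRI-O the cluster's service CIDR (10.96.0.0/12, matching ServiceCIDR in the config dump earlier) as an insecure-registry range; presumably the crio systemd unit reads it via an EnvironmentFile= entry, which would be why the same command restarts crio. Inspecting it on the node is just:

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
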
	I1211 00:11:33.187397   39129 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:11:33.187428   39129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:11:33.187507   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:11:33.187571   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.206116   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.310766   39129 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:11:33.313950   39129 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1211 00:11:33.313971   39129 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1211 00:11:33.313977   39129 command_runner.go:130] > VERSION_ID="12"
	I1211 00:11:33.313982   39129 command_runner.go:130] > VERSION="12 (bookworm)"
	I1211 00:11:33.313987   39129 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1211 00:11:33.313990   39129 command_runner.go:130] > ID=debian
	I1211 00:11:33.313995   39129 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1211 00:11:33.314000   39129 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1211 00:11:33.314006   39129 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1211 00:11:33.314074   39129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:11:33.314099   39129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:11:33.314110   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:11:33.314165   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:11:33.314254   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:11:33.314265   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 00:11:33.314342   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:11:33.314349   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> /etc/test/nested/copy/4875/hosts
	I1211 00:11:33.314395   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:11:33.321833   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:33.338845   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:11:33.355788   39129 start.go:296] duration metric: took 168.358579ms for postStartSetup
	I1211 00:11:33.355933   39129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:11:33.355981   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.374136   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.483570   39129 command_runner.go:130] > 14%
	I1211 00:11:33.484133   39129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:11:33.488331   39129 command_runner.go:130] > 168G
	I1211 00:11:33.488874   39129 fix.go:56] duration metric: took 1.713448769s for fixHost
	I1211 00:11:33.488896   39129 start.go:83] releasing machines lock for "functional-786978", held for 1.713491657s
	I1211 00:11:33.488966   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:33.505970   39129 ssh_runner.go:195] Run: cat /version.json
	I1211 00:11:33.506004   39129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:11:33.506020   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.506067   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.524523   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.532688   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.712031   39129 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1211 00:11:33.714840   39129 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1211 00:11:33.715004   39129 ssh_runner.go:195] Run: systemctl --version
	I1211 00:11:33.720988   39129 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1211 00:11:33.721023   39129 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1211 00:11:33.721418   39129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:11:33.758142   39129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 00:11:33.762640   39129 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1211 00:11:33.762695   39129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:11:33.762759   39129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:11:33.770580   39129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
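
The find/-exec mv just above is how any preinstalled bridge or podman CNI config would be sidelined, renamed to *.mk_disabled, so that the CNI minikube manages (kindnet, per the earlier recommendation) stays the only active one; here nothing matched, so nothing was disabled. On a node where something did match, the directory would look roughly like this (filename purely illustrative):

    ls /etc/cni/net.d/
    # 87-podman-bridge.conflist.mk_disabled   <- hypothetical; no rename happened in this run
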
	I1211 00:11:33.770605   39129 start.go:496] detecting cgroup driver to use...
	I1211 00:11:33.770636   39129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:11:33.770683   39129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:11:33.785751   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:11:33.798781   39129 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:11:33.798859   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:11:33.814594   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:11:33.828060   39129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:11:33.939426   39129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:11:34.063996   39129 docker.go:234] disabling docker service ...
	I1211 00:11:34.064079   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:11:34.088847   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:11:34.106427   39129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:11:34.233444   39129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:11:34.359250   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:11:34.371772   39129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:11:34.384768   39129 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
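
The /etc/crictl.yaml written above pins crictl to the CRI-O socket, which is why the later crictl version and crictl images calls in this log need no endpoint flags. The per-invocation equivalent would be:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
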
	I1211 00:11:34.385910   39129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:11:34.386015   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.395329   39129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:11:34.395408   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.404378   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.412986   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.421585   39129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:11:34.429722   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.438361   39129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.447060   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.456153   39129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:11:34.462793   39129 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1211 00:11:34.463922   39129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:11:34.471096   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:34.576052   39129 ssh_runner.go:195] Run: sudo systemctl restart crio
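
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before this restart: they pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to the cgroupfs driver detected on the host, put conmon in the "pod" cgroup, and re-add the unprivileged-port sysctl. A sketch of how one might spot-check the result (the file itself is never printed in this log, so the expected values below are assembled from those commands):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
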
	I1211 00:11:34.729272   39129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:11:34.729346   39129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:11:34.732930   39129 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1211 00:11:34.732954   39129 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1211 00:11:34.732962   39129 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1211 00:11:34.732969   39129 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:34.732973   39129 command_runner.go:130] > Access: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732985   39129 command_runner.go:130] > Modify: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732992   39129 command_runner.go:130] > Change: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732995   39129 command_runner.go:130] >  Birth: -
	I1211 00:11:34.733171   39129 start.go:564] Will wait 60s for crictl version
	I1211 00:11:34.733232   39129 ssh_runner.go:195] Run: which crictl
	I1211 00:11:34.736601   39129 command_runner.go:130] > /usr/local/bin/crictl
	I1211 00:11:34.736687   39129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:11:34.757793   39129 command_runner.go:130] > Version:  0.1.0
	I1211 00:11:34.757906   39129 command_runner.go:130] > RuntimeName:  cri-o
	I1211 00:11:34.757921   39129 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1211 00:11:34.757928   39129 command_runner.go:130] > RuntimeApiVersion:  v1
	I1211 00:11:34.760151   39129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:11:34.760230   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.787961   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.787986   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.787993   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.787998   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.788005   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.788009   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.788013   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.788019   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.788024   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.788028   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.788035   39129 command_runner.go:130] >      static
	I1211 00:11:34.788039   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.788043   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.788051   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.788055   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.788058   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.788069   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.788074   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.788080   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.788088   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.789644   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.815359   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.815385   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.815392   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.815397   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.815402   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.815425   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.815432   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.815439   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.815448   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.815452   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.815456   39129 command_runner.go:130] >      static
	I1211 00:11:34.815460   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.815468   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.815473   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.815480   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.815484   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.815491   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.815496   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.815505   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.815512   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.822208   39129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:11:34.825193   39129 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:11:34.839960   39129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:11:34.843868   39129 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1211 00:11:34.843970   39129 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:11:34.844072   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:34.844127   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.876890   39129 command_runner.go:130] > {
	I1211 00:11:34.876911   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.876915   39129 command_runner.go:130] >     {
	I1211 00:11:34.876923   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.876928   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.876934   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.876937   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876941   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.876951   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.876963   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.876967   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876971   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.876979   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.876984   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.876987   39129 command_runner.go:130] >     },
	I1211 00:11:34.876991   39129 command_runner.go:130] >     {
	I1211 00:11:34.876997   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.877005   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877011   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.877014   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877018   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877026   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.877038   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.877042   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877046   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.877053   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877060   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877067   39129 command_runner.go:130] >     },
	I1211 00:11:34.877070   39129 command_runner.go:130] >     {
	I1211 00:11:34.877077   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.877089   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877094   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.877098   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877113   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877124   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.877132   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.877139   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877143   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.877147   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.877151   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877154   39129 command_runner.go:130] >     },
	I1211 00:11:34.877158   39129 command_runner.go:130] >     {
	I1211 00:11:34.877165   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.877171   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877176   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.877180   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877186   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877194   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.877204   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.877211   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877216   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.877219   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877224   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877234   39129 command_runner.go:130] >       },
	I1211 00:11:34.877242   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877253   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877257   39129 command_runner.go:130] >     },
	I1211 00:11:34.877260   39129 command_runner.go:130] >     {
	I1211 00:11:34.877267   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.877271   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877280   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.877287   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877291   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877299   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.877309   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.877317   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877326   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.877334   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877343   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877347   39129 command_runner.go:130] >       },
	I1211 00:11:34.877351   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877359   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877363   39129 command_runner.go:130] >     },
	I1211 00:11:34.877367   39129 command_runner.go:130] >     {
	I1211 00:11:34.877374   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.877381   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877387   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.877390   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877394   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877411   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.877420   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.877426   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877430   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.877434   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877438   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877441   39129 command_runner.go:130] >       },
	I1211 00:11:34.877445   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877450   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877455   39129 command_runner.go:130] >     },
	I1211 00:11:34.877459   39129 command_runner.go:130] >     {
	I1211 00:11:34.877473   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.877476   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.877490   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877494   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877502   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.877512   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.877516   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877520   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.877527   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877534   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877538   39129 command_runner.go:130] >     },
	I1211 00:11:34.877550   39129 command_runner.go:130] >     {
	I1211 00:11:34.877556   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.877560   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877565   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.877571   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877575   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877582   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.877602   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.877606   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877614   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.877618   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877630   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877633   39129 command_runner.go:130] >       },
	I1211 00:11:34.877636   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877640   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877646   39129 command_runner.go:130] >     },
	I1211 00:11:34.877649   39129 command_runner.go:130] >     {
	I1211 00:11:34.877656   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.877662   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877667   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.877670   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877674   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877681   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.877695   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.877699   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877703   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.877707   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877714   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.877717   39129 command_runner.go:130] >       },
	I1211 00:11:34.877721   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877732   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.877738   39129 command_runner.go:130] >     }
	I1211 00:11:34.877741   39129 command_runner.go:130] >   ]
	I1211 00:11:34.877744   39129 command_runner.go:130] > }
	I1211 00:11:34.877906   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.877920   39129 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:11:34.877980   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.904837   39129 command_runner.go:130] > {
	I1211 00:11:34.904873   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.904879   39129 command_runner.go:130] >     {
	I1211 00:11:34.904887   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.904893   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904899   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.904903   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904925   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.904940   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.904949   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.904958   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904962   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.904966   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.904971   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.904975   39129 command_runner.go:130] >     },
	I1211 00:11:34.904978   39129 command_runner.go:130] >     {
	I1211 00:11:34.904985   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.904989   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904999   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.905010   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905015   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905023   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.905032   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.905038   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905042   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.905046   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905054   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905064   39129 command_runner.go:130] >     },
	I1211 00:11:34.905068   39129 command_runner.go:130] >     {
	I1211 00:11:34.905075   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.905079   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905084   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.905090   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905095   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905103   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.905113   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.905121   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905126   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.905130   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.905134   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905143   39129 command_runner.go:130] >     },
	I1211 00:11:34.905146   39129 command_runner.go:130] >     {
	I1211 00:11:34.905153   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.905162   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905167   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.905170   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905175   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905182   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.905192   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.905195   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905199   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.905209   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905217   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905228   39129 command_runner.go:130] >       },
	I1211 00:11:34.905237   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905244   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905248   39129 command_runner.go:130] >     },
	I1211 00:11:34.905251   39129 command_runner.go:130] >     {
	I1211 00:11:34.905258   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.905262   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905267   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.905272   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905276   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905284   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.905295   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.905302   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905306   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.905310   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905315   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905322   39129 command_runner.go:130] >       },
	I1211 00:11:34.905326   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905330   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905334   39129 command_runner.go:130] >     },
	I1211 00:11:34.905337   39129 command_runner.go:130] >     {
	I1211 00:11:34.905351   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.905355   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905361   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.905368   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905378   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905391   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.905400   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.905408   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905413   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.905417   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905424   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905431   39129 command_runner.go:130] >       },
	I1211 00:11:34.905435   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905439   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905441   39129 command_runner.go:130] >     },
	I1211 00:11:34.905444   39129 command_runner.go:130] >     {
	I1211 00:11:34.905451   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.905457   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905463   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.905466   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905470   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.905492   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.905496   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905500   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.905509   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905513   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905516   39129 command_runner.go:130] >     },
	I1211 00:11:34.905519   39129 command_runner.go:130] >     {
	I1211 00:11:34.905526   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.905535   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905541   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.905544   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905548   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905556   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.905573   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.905577   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905581   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.905585   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905589   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905592   39129 command_runner.go:130] >       },
	I1211 00:11:34.905596   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905604   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905612   39129 command_runner.go:130] >     },
	I1211 00:11:34.905619   39129 command_runner.go:130] >     {
	I1211 00:11:34.905625   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.905629   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905634   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.905637   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905641   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905657   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.905665   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.905671   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905675   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.905679   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905683   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.905686   39129 command_runner.go:130] >       },
	I1211 00:11:34.905690   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905697   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.905700   39129 command_runner.go:130] >     }
	I1211 00:11:34.905703   39129 command_runner.go:130] >   ]
	I1211 00:11:34.905705   39129 command_runner.go:130] > }
	I1211 00:11:34.908324   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.908347   39129 cache_images.go:86] Images are preloaded, skipping loading
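The two image dumps above are minikube running "sudo crictl images --output json" and checking the decoded list against the images required for the requested Kubernetes version; when every required tag is already present, extraction of the preload tarball is skipped (crio.go:514, cache_images.go:86). A minimal sketch of that kind of check, assuming a hypothetical required-image list; the JSON field names match the output shown above, but the structs and the check itself are illustrative, not minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors the JSON shape printed above: {"images": [{"repoTags": [...]}, ...]}.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Hypothetical required set; minikube derives this from the Kubernetes version.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Index every tag the runtime already has, then report what is missing.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded, skipping extraction")
	} else {
		fmt.Println("missing:", strings.Join(missing, ", "))
	}
}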
	I1211 00:11:34.908354   39129 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:11:34.908461   39129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
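The kubelet block printed by kubeadm.go:947 is a systemd drop-in: the bare "ExecStart=" first clears the unit's inherited ExecStart, and the following line redefines it with the per-node flags (binary path, hostname override, node IP). A rough sketch of rendering such a drop-in with Go's text/template, using illustrative field names rather than minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the values that vary per cluster; names are illustrative.
type kubeletOpts struct {
	BinaryPath       string
	HostnameOverride string
	NodeIP           string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinaryPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above for illustration.
	_ = t.Execute(os.Stdout, kubeletOpts{
		BinaryPath:       "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		HostnameOverride: "functional-786978",
		NodeIP:           "192.168.49.2",
	})
}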
	I1211 00:11:34.908543   39129 ssh_runner.go:195] Run: crio config
	I1211 00:11:34.971791   39129 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1211 00:11:34.971813   39129 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1211 00:11:34.971821   39129 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1211 00:11:34.971824   39129 command_runner.go:130] > #
	I1211 00:11:34.971832   39129 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1211 00:11:34.971839   39129 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1211 00:11:34.971846   39129 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1211 00:11:34.971853   39129 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1211 00:11:34.971857   39129 command_runner.go:130] > # reload'.
	I1211 00:11:34.971875   39129 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1211 00:11:34.971882   39129 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1211 00:11:34.971888   39129 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1211 00:11:34.971894   39129 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1211 00:11:34.971898   39129 command_runner.go:130] > [crio]
	I1211 00:11:34.971903   39129 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1211 00:11:34.971908   39129 command_runner.go:130] > # container images, in this directory.
	I1211 00:11:34.972453   39129 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1211 00:11:34.972468   39129 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1211 00:11:34.973023   39129 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1211 00:11:34.973035   39129 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I1211 00:11:34.973741   39129 command_runner.go:130] > # imagestore = ""
	I1211 00:11:34.973760   39129 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1211 00:11:34.973768   39129 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1211 00:11:34.973950   39129 command_runner.go:130] > # storage_driver = "overlay"
	I1211 00:11:34.973965   39129 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1211 00:11:34.973972   39129 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1211 00:11:34.974083   39129 command_runner.go:130] > # storage_option = [
	I1211 00:11:34.974240   39129 command_runner.go:130] > # ]
	I1211 00:11:34.974255   39129 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1211 00:11:34.974262   39129 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1211 00:11:34.974433   39129 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1211 00:11:34.974477   39129 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1211 00:11:34.974487   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1211 00:11:34.974492   39129 command_runner.go:130] > # always happen on a node reboot
	I1211 00:11:34.974707   39129 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1211 00:11:34.974755   39129 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1211 00:11:34.974769   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1211 00:11:34.974774   39129 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1211 00:11:34.974951   39129 command_runner.go:130] > # version_file_persist = ""
	I1211 00:11:34.974999   39129 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1211 00:11:34.975014   39129 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1211 00:11:34.975286   39129 command_runner.go:130] > # internal_wipe = true
	I1211 00:11:34.975303   39129 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1211 00:11:34.975309   39129 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1211 00:11:34.975533   39129 command_runner.go:130] > # internal_repair = true
	I1211 00:11:34.975547   39129 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1211 00:11:34.975554   39129 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1211 00:11:34.975560   39129 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1211 00:11:34.975800   39129 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1211 00:11:34.975813   39129 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1211 00:11:34.975817   39129 command_runner.go:130] > [crio.api]
	I1211 00:11:34.975838   39129 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1211 00:11:34.976047   39129 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1211 00:11:34.976068   39129 command_runner.go:130] > # IP address on which the stream server will listen.
	I1211 00:11:34.976289   39129 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1211 00:11:34.976305   39129 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1211 00:11:34.976322   39129 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1211 00:11:34.976522   39129 command_runner.go:130] > # stream_port = "0"
	I1211 00:11:34.976537   39129 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1211 00:11:34.976743   39129 command_runner.go:130] > # stream_enable_tls = false
	I1211 00:11:34.976759   39129 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1211 00:11:34.976966   39129 command_runner.go:130] > # stream_idle_timeout = ""
	I1211 00:11:34.976981   39129 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1211 00:11:34.976987   39129 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977102   39129 command_runner.go:130] > # stream_tls_cert = ""
	I1211 00:11:34.977116   39129 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1211 00:11:34.977122   39129 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977375   39129 command_runner.go:130] > # stream_tls_key = ""
	I1211 00:11:34.977408   39129 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1211 00:11:34.977433   39129 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1211 00:11:34.977440   39129 command_runner.go:130] > # automatically pick up the changes.
	I1211 00:11:34.977571   39129 command_runner.go:130] > # stream_tls_ca = ""
	I1211 00:11:34.977641   39129 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977779   39129 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1211 00:11:34.977797   39129 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977991   39129 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
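The [crio.api] options above describe the gRPC endpoint that the kubelet (and crictl) talk to. For reference, the same socket can be queried directly with the standard Kubernetes CRI client; a minimal sketch using the default socket path from the listen option above, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket, per the commented-out `listen` default above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is the simplest CRI call; crictl speaks this same API under the hood.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}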
	I1211 00:11:34.978007   39129 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1211 00:11:34.978040   39129 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1211 00:11:34.978056   39129 command_runner.go:130] > [crio.runtime]
	I1211 00:11:34.978069   39129 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1211 00:11:34.978076   39129 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1211 00:11:34.978080   39129 command_runner.go:130] > # "nofile=1024:2048"
	I1211 00:11:34.978086   39129 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1211 00:11:34.978208   39129 command_runner.go:130] > # default_ulimits = [
	I1211 00:11:34.978352   39129 command_runner.go:130] > # ]
	I1211 00:11:34.978369   39129 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1211 00:11:34.978551   39129 command_runner.go:130] > # no_pivot = false
	I1211 00:11:34.978566   39129 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1211 00:11:34.978572   39129 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1211 00:11:34.978723   39129 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1211 00:11:34.978739   39129 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1211 00:11:34.978744   39129 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1211 00:11:34.978775   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.978921   39129 command_runner.go:130] > # conmon = ""
	I1211 00:11:34.978933   39129 command_runner.go:130] > # Cgroup setting for conmon
	I1211 00:11:34.978941   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1211 00:11:34.979286   39129 command_runner.go:130] > conmon_cgroup = "pod"
	I1211 00:11:34.979301   39129 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1211 00:11:34.979307   39129 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1211 00:11:34.979343   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.979348   39129 command_runner.go:130] > # conmon_env = [
	I1211 00:11:34.979496   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979512   39129 command_runner.go:130] > # Additional environment variables to set for all the
	I1211 00:11:34.979518   39129 command_runner.go:130] > # containers. These are overridden if set in the
	I1211 00:11:34.979524   39129 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1211 00:11:34.979552   39129 command_runner.go:130] > # default_env = [
	I1211 00:11:34.979707   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979725   39129 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1211 00:11:34.979734   39129 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1211 00:11:34.979983   39129 command_runner.go:130] > # selinux = false
	I1211 00:11:34.980000   39129 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1211 00:11:34.980009   39129 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1211 00:11:34.980015   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980366   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.980414   39129 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1211 00:11:34.980429   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980434   39129 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1211 00:11:34.980447   39129 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1211 00:11:34.980453   39129 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1211 00:11:34.980464   39129 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1211 00:11:34.980471   39129 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1211 00:11:34.980493   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980499   39129 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1211 00:11:34.980514   39129 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1211 00:11:34.980524   39129 command_runner.go:130] > # the cgroup blockio controller.
	I1211 00:11:34.980678   39129 command_runner.go:130] > # blockio_config_file = ""
	I1211 00:11:34.980713   39129 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1211 00:11:34.980723   39129 command_runner.go:130] > # blockio parameters.
	I1211 00:11:34.980981   39129 command_runner.go:130] > # blockio_reload = false
	I1211 00:11:34.980995   39129 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1211 00:11:34.980999   39129 command_runner.go:130] > # irqbalance daemon.
	I1211 00:11:34.981198   39129 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1211 00:11:34.981209   39129 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1211 00:11:34.981217   39129 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1211 00:11:34.981265   39129 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1211 00:11:34.981385   39129 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1211 00:11:34.981396   39129 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1211 00:11:34.981402   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.981515   39129 command_runner.go:130] > # rdt_config_file = ""
	I1211 00:11:34.981525   39129 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1211 00:11:34.981657   39129 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1211 00:11:34.981668   39129 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1211 00:11:34.981795   39129 command_runner.go:130] > # separate_pull_cgroup = ""
	I1211 00:11:34.981809   39129 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1211 00:11:34.981816   39129 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1211 00:11:34.981820   39129 command_runner.go:130] > # will be added.
	I1211 00:11:34.981926   39129 command_runner.go:130] > # default_capabilities = [
	I1211 00:11:34.982055   39129 command_runner.go:130] > # 	"CHOWN",
	I1211 00:11:34.982151   39129 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1211 00:11:34.982256   39129 command_runner.go:130] > # 	"FSETID",
	I1211 00:11:34.982350   39129 command_runner.go:130] > # 	"FOWNER",
	I1211 00:11:34.982451   39129 command_runner.go:130] > # 	"SETGID",
	I1211 00:11:34.982543   39129 command_runner.go:130] > # 	"SETUID",
	I1211 00:11:34.982687   39129 command_runner.go:130] > # 	"SETPCAP",
	I1211 00:11:34.982695   39129 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1211 00:11:34.982819   39129 command_runner.go:130] > # 	"KILL",
	I1211 00:11:34.982949   39129 command_runner.go:130] > # ]
	I1211 00:11:34.982960   39129 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1211 00:11:34.982993   39129 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1211 00:11:34.983107   39129 command_runner.go:130] > # add_inheritable_capabilities = false
	I1211 00:11:34.983118   39129 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1211 00:11:34.983132   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983136   39129 command_runner.go:130] > default_sysctls = [
	I1211 00:11:34.983272   39129 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1211 00:11:34.983279   39129 command_runner.go:130] > ]
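The single sysctl set here, net.ipv4.ip_unprivileged_port_start=0, lets unprivileged processes inside containers bind ports below 1024, presumably so containerized components such as ingress controllers can listen on 80/443 without running as root. A quick way to observe the effect from inside a container, as a plain Go program with nothing minikube-specific:

package main

import (
	"fmt"
	"net"
)

func main() {
	// With ip_unprivileged_port_start=0 this succeeds for a non-root process;
	// with the kernel default of 1024 it fails with "permission denied".
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		fmt.Println("bind :80 failed:", err)
		return
	}
	defer ln.Close()
	fmt.Println("bound", ln.Addr(), "without root")
}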
	I1211 00:11:34.983285   39129 command_runner.go:130] > # List of devices on the host that a
	I1211 00:11:34.983300   39129 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1211 00:11:34.983304   39129 command_runner.go:130] > # allowed_devices = [
	I1211 00:11:34.983428   39129 command_runner.go:130] > # 	"/dev/fuse",
	I1211 00:11:34.983527   39129 command_runner.go:130] > # 	"/dev/net/tun",
	I1211 00:11:34.983650   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983660   39129 command_runner.go:130] > # List of additional devices, specified as
	I1211 00:11:34.983668   39129 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1211 00:11:34.983680   39129 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1211 00:11:34.983687   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983813   39129 command_runner.go:130] > # additional_devices = [
	I1211 00:11:34.983820   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983826   39129 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1211 00:11:34.983923   39129 command_runner.go:130] > # cdi_spec_dirs = [
	I1211 00:11:34.984053   39129 command_runner.go:130] > # 	"/etc/cdi",
	I1211 00:11:34.984060   39129 command_runner.go:130] > # 	"/var/run/cdi",
	I1211 00:11:34.984160   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984177   39129 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1211 00:11:34.984184   39129 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1211 00:11:34.984195   39129 command_runner.go:130] > # Defaults to false.
	I1211 00:11:34.984334   39129 command_runner.go:130] > # device_ownership_from_security_context = false
	I1211 00:11:34.984345   39129 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1211 00:11:34.984355   39129 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1211 00:11:34.984488   39129 command_runner.go:130] > # hooks_dir = [
	I1211 00:11:34.984640   39129 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1211 00:11:34.984647   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984653   39129 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1211 00:11:34.984667   39129 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1211 00:11:34.984672   39129 command_runner.go:130] > # its default mounts from the following two files:
	I1211 00:11:34.984675   39129 command_runner.go:130] > #
	I1211 00:11:34.984681   39129 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1211 00:11:34.984694   39129 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1211 00:11:34.984700   39129 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1211 00:11:34.984703   39129 command_runner.go:130] > #
	I1211 00:11:34.984710   39129 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1211 00:11:34.984716   39129 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1211 00:11:34.984722   39129 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1211 00:11:34.984727   39129 command_runner.go:130] > #      only add mounts it finds in this file.
	I1211 00:11:34.984729   39129 command_runner.go:130] > #
	I1211 00:11:34.984883   39129 command_runner.go:130] > # default_mounts_file = ""
	I1211 00:11:34.984900   39129 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1211 00:11:34.984908   39129 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1211 00:11:34.985051   39129 command_runner.go:130] > # pids_limit = -1
	I1211 00:11:34.985062   39129 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1211 00:11:34.985075   39129 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1211 00:11:34.985083   39129 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1211 00:11:34.985091   39129 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1211 00:11:34.985222   39129 command_runner.go:130] > # log_size_max = -1
	I1211 00:11:34.985233   39129 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1211 00:11:34.985372   39129 command_runner.go:130] > # log_to_journald = false
	I1211 00:11:34.985382   39129 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1211 00:11:34.985404   39129 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1211 00:11:34.985411   39129 command_runner.go:130] > # Path to directory for container attach sockets.
	I1211 00:11:34.985416   39129 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1211 00:11:34.985422   39129 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1211 00:11:34.985425   39129 command_runner.go:130] > # bind_mount_prefix = ""
	I1211 00:11:34.985434   39129 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1211 00:11:34.985569   39129 command_runner.go:130] > # read_only = false
	I1211 00:11:34.985580   39129 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1211 00:11:34.985587   39129 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1211 00:11:34.985601   39129 command_runner.go:130] > # live configuration reload.
	I1211 00:11:34.985605   39129 command_runner.go:130] > # log_level = "info"
	I1211 00:11:34.985611   39129 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1211 00:11:34.985616   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.985619   39129 command_runner.go:130] > # log_filter = ""
	I1211 00:11:34.985626   39129 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985632   39129 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1211 00:11:34.985635   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985643   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985647   39129 command_runner.go:130] > # uid_mappings = ""
	I1211 00:11:34.985654   39129 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985660   39129 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1211 00:11:34.985664   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985672   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985681   39129 command_runner.go:130] > # gid_mappings = ""
	I1211 00:11:34.985688   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1211 00:11:34.985694   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985700   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985708   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985712   39129 command_runner.go:130] > # minimum_mappable_uid = -1
	I1211 00:11:34.985718   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1211 00:11:34.985723   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985729   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985737   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985741   39129 command_runner.go:130] > # minimum_mappable_gid = -1
	I1211 00:11:34.985747   39129 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1211 00:11:34.985753   39129 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1211 00:11:34.985759   39129 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1211 00:11:34.985975   39129 command_runner.go:130] > # ctr_stop_timeout = 30
	I1211 00:11:34.985988   39129 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1211 00:11:34.985994   39129 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1211 00:11:34.985999   39129 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1211 00:11:34.986004   39129 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1211 00:11:34.986008   39129 command_runner.go:130] > # drop_infra_ctr = true
	I1211 00:11:34.986014   39129 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1211 00:11:34.986019   39129 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1211 00:11:34.986029   39129 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1211 00:11:34.986033   39129 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1211 00:11:34.986040   39129 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1211 00:11:34.986046   39129 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1211 00:11:34.986051   39129 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1211 00:11:34.986057   39129 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1211 00:11:34.986060   39129 command_runner.go:130] > # shared_cpuset = ""
	I1211 00:11:34.986066   39129 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1211 00:11:34.986071   39129 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1211 00:11:34.986075   39129 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1211 00:11:34.986082   39129 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1211 00:11:34.986085   39129 command_runner.go:130] > # pinns_path = ""
	I1211 00:11:34.986091   39129 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1211 00:11:34.986098   39129 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1211 00:11:34.986101   39129 command_runner.go:130] > # enable_criu_support = true
	I1211 00:11:34.986107   39129 command_runner.go:130] > # Enable/disable the generation of the container,
	I1211 00:11:34.986112   39129 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1211 00:11:34.986116   39129 command_runner.go:130] > # enable_pod_events = false
	I1211 00:11:34.986122   39129 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1211 00:11:34.986131   39129 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1211 00:11:34.986135   39129 command_runner.go:130] > # default_runtime = "crun"
	I1211 00:11:34.986140   39129 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1211 00:11:34.986148   39129 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1211 00:11:34.986159   39129 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1211 00:11:34.986164   39129 command_runner.go:130] > # creation as a file is not desired either.
	I1211 00:11:34.986172   39129 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1211 00:11:34.986177   39129 command_runner.go:130] > # the hostname is being managed dynamically.
	I1211 00:11:34.986181   39129 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1211 00:11:34.986185   39129 command_runner.go:130] > # ]
	I1211 00:11:34.986192   39129 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1211 00:11:34.986198   39129 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1211 00:11:34.986205   39129 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1211 00:11:34.986210   39129 command_runner.go:130] > # Each entry in the table should follow the format:
	I1211 00:11:34.986212   39129 command_runner.go:130] > #
	I1211 00:11:34.986217   39129 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1211 00:11:34.986221   39129 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1211 00:11:34.986226   39129 command_runner.go:130] > # runtime_type = "oci"
	I1211 00:11:34.986231   39129 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1211 00:11:34.986235   39129 command_runner.go:130] > # inherit_default_runtime = false
	I1211 00:11:34.986240   39129 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1211 00:11:34.986244   39129 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1211 00:11:34.986248   39129 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1211 00:11:34.986251   39129 command_runner.go:130] > # monitor_env = []
	I1211 00:11:34.986256   39129 command_runner.go:130] > # privileged_without_host_devices = false
	I1211 00:11:34.986259   39129 command_runner.go:130] > # allowed_annotations = []
	I1211 00:11:34.986265   39129 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1211 00:11:34.986268   39129 command_runner.go:130] > # no_sync_log = false
	I1211 00:11:34.986272   39129 command_runner.go:130] > # default_annotations = {}
	I1211 00:11:34.986276   39129 command_runner.go:130] > # stream_websockets = false
	I1211 00:11:34.986279   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.986309   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.986315   39129 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1211 00:11:34.986324   39129 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1211 00:11:34.986330   39129 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1211 00:11:34.986337   39129 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1211 00:11:34.986340   39129 command_runner.go:130] > #   in $PATH.
	I1211 00:11:34.986346   39129 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1211 00:11:34.986350   39129 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1211 00:11:34.986356   39129 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1211 00:11:34.986359   39129 command_runner.go:130] > #   state.
	I1211 00:11:34.986366   39129 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1211 00:11:34.986375   39129 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1211 00:11:34.986381   39129 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1211 00:11:34.986387   39129 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1211 00:11:34.986392   39129 command_runner.go:130] > #   the values from the default runtime on load time.
	I1211 00:11:34.986398   39129 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1211 00:11:34.986404   39129 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1211 00:11:34.986410   39129 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1211 00:11:34.986417   39129 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1211 00:11:34.986421   39129 command_runner.go:130] > #   The currently recognized values are:
	I1211 00:11:34.986428   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1211 00:11:34.986435   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1211 00:11:34.986440   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1211 00:11:34.986446   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1211 00:11:34.986455   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1211 00:11:34.986462   39129 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1211 00:11:34.986469   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1211 00:11:34.986475   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1211 00:11:34.986481   39129 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1211 00:11:34.986487   39129 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1211 00:11:34.986494   39129 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1211 00:11:34.986500   39129 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1211 00:11:34.986505   39129 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1211 00:11:34.986511   39129 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1211 00:11:34.986517   39129 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1211 00:11:34.986528   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1211 00:11:34.986534   39129 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1211 00:11:34.986538   39129 command_runner.go:130] > #   deprecated option "conmon".
	I1211 00:11:34.986545   39129 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1211 00:11:34.986550   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1211 00:11:34.986556   39129 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1211 00:11:34.986561   39129 command_runner.go:130] > #   should be moved to the container's cgroup
	I1211 00:11:34.986567   39129 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1211 00:11:34.986572   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1211 00:11:34.986579   39129 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1211 00:11:34.986583   39129 command_runner.go:130] > #   conmon-rs by using:
	I1211 00:11:34.986591   39129 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1211 00:11:34.986598   39129 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1211 00:11:34.986606   39129 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1211 00:11:34.986613   39129 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1211 00:11:34.986618   39129 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1211 00:11:34.986625   39129 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1211 00:11:34.986633   39129 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1211 00:11:34.986641   39129 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1211 00:11:34.986651   39129 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1211 00:11:34.986658   39129 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1211 00:11:34.986662   39129 command_runner.go:130] > #   when the machine crashes.
	I1211 00:11:34.986669   39129 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1211 00:11:34.986677   39129 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1211 00:11:34.986685   39129 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1211 00:11:34.986689   39129 command_runner.go:130] > #   seccomp profile for the runtime.
	I1211 00:11:34.986695   39129 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1211 00:11:34.986702   39129 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1211 00:11:34.986704   39129 command_runner.go:130] > #
	I1211 00:11:34.986708   39129 command_runner.go:130] > # Using the seccomp notifier feature:
	I1211 00:11:34.986711   39129 command_runner.go:130] > #
	I1211 00:11:34.986717   39129 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1211 00:11:34.986724   39129 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1211 00:11:34.986729   39129 command_runner.go:130] > #
	I1211 00:11:34.986739   39129 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1211 00:11:34.986745   39129 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1211 00:11:34.986748   39129 command_runner.go:130] > #
	I1211 00:11:34.986754   39129 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1211 00:11:34.986757   39129 command_runner.go:130] > # feature.
	I1211 00:11:34.986760   39129 command_runner.go:130] > #
	I1211 00:11:34.986766   39129 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1211 00:11:34.986772   39129 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1211 00:11:34.986778   39129 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1211 00:11:34.986784   39129 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1211 00:11:34.986790   39129 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1211 00:11:34.986792   39129 command_runner.go:130] > #
	I1211 00:11:34.986799   39129 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1211 00:11:34.986805   39129 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1211 00:11:34.986808   39129 command_runner.go:130] > #
	I1211 00:11:34.986814   39129 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1211 00:11:34.986820   39129 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1211 00:11:34.986822   39129 command_runner.go:130] > #
	I1211 00:11:34.986828   39129 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1211 00:11:34.986833   39129 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1211 00:11:34.986837   39129 command_runner.go:130] > # limitation.
	I1211 00:11:34.986842   39129 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1211 00:11:34.986846   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1211 00:11:34.986850   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986853   39129 command_runner.go:130] > runtime_root = "/run/crun"
	I1211 00:11:34.986857   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986860   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986864   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.986868   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.986872   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.986876   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.986880   39129 command_runner.go:130] > allowed_annotations = [
	I1211 00:11:34.986887   39129 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1211 00:11:34.986889   39129 command_runner.go:130] > ]
	I1211 00:11:34.986894   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.986898   39129 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1211 00:11:34.986902   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1211 00:11:34.986906   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986909   39129 command_runner.go:130] > runtime_root = "/run/runc"
	I1211 00:11:34.986913   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986917   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986921   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.987106   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.987121   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.987127   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.987132   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.987139   39129 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1211 00:11:34.987147   39129 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1211 00:11:34.987154   39129 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1211 00:11:34.987166   39129 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1211 00:11:34.987177   39129 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1211 00:11:34.987187   39129 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1211 00:11:34.987194   39129 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1211 00:11:34.987200   39129 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1211 00:11:34.987209   39129 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1211 00:11:34.987218   39129 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1211 00:11:34.987224   39129 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1211 00:11:34.987231   39129 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1211 00:11:34.987235   39129 command_runner.go:130] > # Example:
	I1211 00:11:34.987241   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1211 00:11:34.987246   39129 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1211 00:11:34.987251   39129 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1211 00:11:34.987255   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1211 00:11:34.987258   39129 command_runner.go:130] > # cpuset = "0-1"
	I1211 00:11:34.987262   39129 command_runner.go:130] > # cpushares = "5"
	I1211 00:11:34.987269   39129 command_runner.go:130] > # cpuquota = "1000"
	I1211 00:11:34.987273   39129 command_runner.go:130] > # cpuperiod = "100000"
	I1211 00:11:34.987277   39129 command_runner.go:130] > # cpulimit = "35"
	I1211 00:11:34.987280   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.987284   39129 command_runner.go:130] > # The workload name is workload-type.
	I1211 00:11:34.987292   39129 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1211 00:11:34.987298   39129 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1211 00:11:34.987303   39129 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1211 00:11:34.987311   39129 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1211 00:11:34.987317   39129 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1211 00:11:34.987322   39129 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1211 00:11:34.987328   39129 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1211 00:11:34.987332   39129 command_runner.go:130] > # Default value is set to true
	I1211 00:11:34.987336   39129 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1211 00:11:34.987342   39129 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1211 00:11:34.987346   39129 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1211 00:11:34.987350   39129 command_runner.go:130] > # Default value is set to 'false'
	I1211 00:11:34.987355   39129 command_runner.go:130] > # disable_hostport_mapping = false
	I1211 00:11:34.987361   39129 command_runner.go:130] > # timezone: Sets the timezone for a container in CRI-O.
	I1211 00:11:34.987369   39129 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1211 00:11:34.987372   39129 command_runner.go:130] > # timezone = ""
	I1211 00:11:34.987379   39129 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1211 00:11:34.987382   39129 command_runner.go:130] > #
	I1211 00:11:34.987387   39129 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1211 00:11:34.987393   39129 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1211 00:11:34.987396   39129 command_runner.go:130] > [crio.image]
	I1211 00:11:34.987402   39129 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1211 00:11:34.987407   39129 command_runner.go:130] > # default_transport = "docker://"
	I1211 00:11:34.987413   39129 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1211 00:11:34.987419   39129 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987423   39129 command_runner.go:130] > # global_auth_file = ""
	I1211 00:11:34.987428   39129 command_runner.go:130] > # The image used to instantiate infra containers.
	I1211 00:11:34.987432   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987442   39129 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.987448   39129 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1211 00:11:34.987454   39129 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987458   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987463   39129 command_runner.go:130] > # pause_image_auth_file = ""
	I1211 00:11:34.987468   39129 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1211 00:11:34.987478   39129 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1211 00:11:34.987484   39129 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1211 00:11:34.987489   39129 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1211 00:11:34.987505   39129 command_runner.go:130] > # pause_command = "/pause"
	I1211 00:11:34.987511   39129 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1211 00:11:34.987518   39129 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1211 00:11:34.987524   39129 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1211 00:11:34.987530   39129 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1211 00:11:34.987536   39129 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1211 00:11:34.987542   39129 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1211 00:11:34.987545   39129 command_runner.go:130] > # pinned_images = [
	I1211 00:11:34.987549   39129 command_runner.go:130] > # ]
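The commented defaults above spell out three match modes for pinned-image patterns. A minimal Go sketch of the documented rules (exact, glob with a trailing *, keyword with wildcards on both ends); illustrative only, not CRI-O's implementation:

package main

import (
	"fmt"
	"strings"
)

// pinnedMatch applies the documented rules: exact matches must match the
// entire name, glob patterns may carry a wildcard * at the end, and
// keyword patterns carry wildcards on both ends and may match anywhere.
func pinnedMatch(pattern, image string) bool {
	switch {
	case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(image, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
	default:
		return pattern == image
	}
}

func main() {
	img := "registry.k8s.io/pause:3.10.1"
	fmt.Println(pinnedMatch("registry.k8s.io/pause:3.10.1", img)) // true (exact)
	fmt.Println(pinnedMatch("registry.k8s.io/*", img))            // true (glob)
	fmt.Println(pinnedMatch("*pause*", img))                      // true (keyword)
}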
	I1211 00:11:34.987555   39129 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1211 00:11:34.987561   39129 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1211 00:11:34.987567   39129 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1211 00:11:34.987574   39129 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1211 00:11:34.987579   39129 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1211 00:11:34.987584   39129 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1211 00:11:34.987589   39129 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1211 00:11:34.987596   39129 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1211 00:11:34.987602   39129 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1211 00:11:34.987608   39129 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1211 00:11:34.987614   39129 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1211 00:11:34.987618   39129 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1211 00:11:34.987624   39129 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1211 00:11:34.987631   39129 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1211 00:11:34.987634   39129 command_runner.go:130] > # changing them here.
	I1211 00:11:34.987643   39129 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1211 00:11:34.987646   39129 command_runner.go:130] > # insecure_registries = [
	I1211 00:11:34.987651   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987657   39129 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1211 00:11:34.987662   39129 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1211 00:11:34.987666   39129 command_runner.go:130] > # image_volumes = "mkdir"
	I1211 00:11:34.987671   39129 command_runner.go:130] > # Temporary directory to use for storing big files
	I1211 00:11:34.987675   39129 command_runner.go:130] > # big_files_temporary_dir = ""
	I1211 00:11:34.987681   39129 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1211 00:11:34.987688   39129 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1211 00:11:34.987692   39129 command_runner.go:130] > # auto_reload_registries = false
	I1211 00:11:34.987698   39129 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1211 00:11:34.987706   39129 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1211 00:11:34.987711   39129 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1211 00:11:34.987715   39129 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1211 00:11:34.987719   39129 command_runner.go:130] > # The mode of short name resolution.
	I1211 00:11:34.987726   39129 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1211 00:11:34.987734   39129 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1211 00:11:34.987739   39129 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1211 00:11:34.987743   39129 command_runner.go:130] > # short_name_mode = "enforcing"
	I1211 00:11:34.987749   39129 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1211 00:11:34.987754   39129 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1211 00:11:34.987763   39129 command_runner.go:130] > # oci_artifact_mount_support = true
	I1211 00:11:34.987770   39129 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1211 00:11:34.987773   39129 command_runner.go:130] > # CNI plugins.
	I1211 00:11:34.987776   39129 command_runner.go:130] > [crio.network]
	I1211 00:11:34.987782   39129 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1211 00:11:34.987787   39129 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1211 00:11:34.987791   39129 command_runner.go:130] > # cni_default_network = ""
	I1211 00:11:34.987797   39129 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1211 00:11:34.987801   39129 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1211 00:11:34.987806   39129 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1211 00:11:34.987809   39129 command_runner.go:130] > # plugin_dirs = [
	I1211 00:11:34.987816   39129 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1211 00:11:34.987819   39129 command_runner.go:130] > # ]
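Per the comment above, when cni_default_network is unset CRI-O picks the first network found in network_dir. A minimal Go sketch, under the assumption that "first" means lexical order (os.ReadDir returns entries sorted by filename); not CRI-O's actual loader:

package main

import (
	"fmt"
	"os"
	"strings"
)

// firstCNIConfig returns the lexically first CNI config file in dir.
func firstCNIConfig(dir string) (string, error) {
	entries, err := os.ReadDir(dir) // sorted by filename
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
			return name, nil
		}
	}
	return "", fmt.Errorf("no CNI config found in %s", dir)
}

func main() {
	name, err := firstCNIConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default network config:", name)
}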
	I1211 00:11:34.987823   39129 command_runner.go:130] > # List of included pod metrics.
	I1211 00:11:34.987827   39129 command_runner.go:130] > # included_pod_metrics = [
	I1211 00:11:34.987830   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987837   39129 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1211 00:11:34.987840   39129 command_runner.go:130] > [crio.metrics]
	I1211 00:11:34.987845   39129 command_runner.go:130] > # Globally enable or disable metrics support.
	I1211 00:11:34.987849   39129 command_runner.go:130] > # enable_metrics = false
	I1211 00:11:34.987853   39129 command_runner.go:130] > # Specify enabled metrics collectors.
	I1211 00:11:34.987859   39129 command_runner.go:130] > # By default, all metrics are enabled.
	I1211 00:11:34.987865   39129 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1211 00:11:34.987871   39129 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1211 00:11:34.987877   39129 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1211 00:11:34.987880   39129 command_runner.go:130] > # metrics_collectors = [
	I1211 00:11:34.987884   39129 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1211 00:11:34.987888   39129 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1211 00:11:34.987892   39129 command_runner.go:130] > # 	"containers_oom_total",
	I1211 00:11:34.987895   39129 command_runner.go:130] > # 	"processes_defunct",
	I1211 00:11:34.987900   39129 command_runner.go:130] > # 	"operations_total",
	I1211 00:11:34.987904   39129 command_runner.go:130] > # 	"operations_latency_seconds",
	I1211 00:11:34.987908   39129 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1211 00:11:34.987912   39129 command_runner.go:130] > # 	"operations_errors_total",
	I1211 00:11:34.987916   39129 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1211 00:11:34.987920   39129 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1211 00:11:34.987924   39129 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1211 00:11:34.987928   39129 command_runner.go:130] > # 	"image_pulls_success_total",
	I1211 00:11:34.987932   39129 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1211 00:11:34.987936   39129 command_runner.go:130] > # 	"containers_oom_count_total",
	I1211 00:11:34.987942   39129 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1211 00:11:34.987946   39129 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1211 00:11:34.987950   39129 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1211 00:11:34.987953   39129 command_runner.go:130] > # ]
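The prefix rule above means collector names compare equal modulo the optional "container_runtime_" and "crio_" prefixes. A tiny Go sketch with a hypothetical normalize helper:

package main

import (
	"fmt"
	"strings"
)

// normalize strips the optional prefixes, so "operations",
// "crio_operations" and "container_runtime_crio_operations" all name the
// same collector, as described above.
func normalize(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	return strings.TrimPrefix(name, "crio_")
}

func main() {
	fmt.Println(normalize("operations"))                        // operations
	fmt.Println(normalize("crio_operations"))                   // operations
	fmt.Println(normalize("container_runtime_crio_operations")) // operations
}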
	I1211 00:11:34.987962   39129 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1211 00:11:34.987967   39129 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1211 00:11:34.987972   39129 command_runner.go:130] > # The port on which the metrics server will listen.
	I1211 00:11:34.987975   39129 command_runner.go:130] > # metrics_port = 9090
	I1211 00:11:34.987980   39129 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1211 00:11:34.987984   39129 command_runner.go:130] > # metrics_socket = ""
	I1211 00:11:34.987989   39129 command_runner.go:130] > # The certificate for the secure metrics server.
	I1211 00:11:34.987994   39129 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1211 00:11:34.988001   39129 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1211 00:11:34.988005   39129 command_runner.go:130] > # certificate on any modification event.
	I1211 00:11:34.988008   39129 command_runner.go:130] > # metrics_cert = ""
	I1211 00:11:34.988013   39129 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1211 00:11:34.988018   39129 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1211 00:11:34.988021   39129 command_runner.go:130] > # metrics_key = ""
	I1211 00:11:34.988026   39129 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1211 00:11:34.988030   39129 command_runner.go:130] > [crio.tracing]
	I1211 00:11:34.988035   39129 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1211 00:11:34.988038   39129 command_runner.go:130] > # enable_tracing = false
	I1211 00:11:34.988044   39129 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1211 00:11:34.988050   39129 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1211 00:11:34.988056   39129 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1211 00:11:34.988061   39129 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1211 00:11:34.988064   39129 command_runner.go:130] > # CRI-O NRI configuration.
	I1211 00:11:34.988067   39129 command_runner.go:130] > [crio.nri]
	I1211 00:11:34.988071   39129 command_runner.go:130] > # Globally enable or disable NRI.
	I1211 00:11:34.988075   39129 command_runner.go:130] > # enable_nri = true
	I1211 00:11:34.988079   39129 command_runner.go:130] > # NRI socket to listen on.
	I1211 00:11:34.988083   39129 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1211 00:11:34.988087   39129 command_runner.go:130] > # NRI plugin directory to use.
	I1211 00:11:34.988091   39129 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1211 00:11:34.988095   39129 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1211 00:11:34.988100   39129 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1211 00:11:34.988108   39129 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1211 00:11:34.988171   39129 command_runner.go:130] > # nri_disable_connections = false
	I1211 00:11:34.988177   39129 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1211 00:11:34.988182   39129 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1211 00:11:34.988186   39129 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1211 00:11:34.988190   39129 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1211 00:11:34.988194   39129 command_runner.go:130] > # NRI default validator configuration.
	I1211 00:11:34.988201   39129 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1211 00:11:34.988207   39129 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1211 00:11:34.988211   39129 command_runner.go:130] > # can be restricted/rejected:
	I1211 00:11:34.988215   39129 command_runner.go:130] > # - OCI hook injection
	I1211 00:11:34.988220   39129 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1211 00:11:34.988225   39129 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1211 00:11:34.988229   39129 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1211 00:11:34.988233   39129 command_runner.go:130] > # - adjustment of linux namespaces
	I1211 00:11:34.988240   39129 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1211 00:11:34.988246   39129 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1211 00:11:34.988251   39129 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1211 00:11:34.988254   39129 command_runner.go:130] > #
	I1211 00:11:34.988258   39129 command_runner.go:130] > # [crio.nri.default_validator]
	I1211 00:11:34.988262   39129 command_runner.go:130] > # nri_enable_default_validator = false
	I1211 00:11:34.988267   39129 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1211 00:11:34.988272   39129 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1211 00:11:34.988277   39129 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1211 00:11:34.988282   39129 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1211 00:11:34.988287   39129 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1211 00:11:34.988291   39129 command_runner.go:130] > # nri_validator_required_plugins = [
	I1211 00:11:34.988294   39129 command_runner.go:130] > # ]
	I1211 00:11:34.988299   39129 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1211 00:11:34.988306   39129 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1211 00:11:34.988309   39129 command_runner.go:130] > [crio.stats]
	I1211 00:11:34.988316   39129 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1211 00:11:34.988321   39129 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1211 00:11:34.988324   39129 command_runner.go:130] > # stats_collection_period = 0
	I1211 00:11:34.988334   39129 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1211 00:11:34.988341   39129 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1211 00:11:34.988345   39129 command_runner.go:130] > # collection_period = 0
	I1211 00:11:34.988741   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943588402Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1211 00:11:34.988759   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943910852Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1211 00:11:34.988775   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944105801Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1211 00:11:34.988788   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944281599Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1211 00:11:34.988804   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944534263Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.988813   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944919976Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1211 00:11:34.988827   39129 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
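The startup messages above show the load order: the single base file first, then the drop-in files under /etc/crio/crio.conf.d in lexical order (02-crio.conf before 10-crio.conf), with not-existing files skipped. A minimal Go sketch of that merge, assuming the github.com/BurntSushi/toml package and a one-field config struct chosen purely for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/BurntSushi/toml"
)

// Config carries a single field for illustration; decoding successive
// files into the same struct means later drop-ins override keys they set
// and leave everything else untouched.
type Config struct {
	Crio struct {
		Runtime struct {
			DefaultRuntime string `toml:"default_runtime"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func load(base, dropInDir string) (*Config, error) {
	var cfg Config
	if _, err := os.Stat(base); err == nil { // skip a not-existing base file
		if _, err := toml.DecodeFile(base, &cfg); err != nil {
			return nil, err
		}
	}
	entries, err := os.ReadDir(dropInDir) // lexical order
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		if !strings.HasSuffix(e.Name(), ".conf") {
			continue
		}
		if _, err := toml.DecodeFile(filepath.Join(dropInDir, e.Name()), &cfg); err != nil {
			return nil, err
		}
	}
	return &cfg, nil
}

func main() {
	cfg, err := load("/etc/crio/crio.conf", "/etc/crio/crio.conf.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default_runtime:", cfg.Crio.Runtime.DefaultRuntime)
}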
	I1211 00:11:34.988906   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:34.988923   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:34.988942   39129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:11:34.988966   39129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:11:34.989098   39129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
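minikube writes this manifest out to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal sketch of how such a manifest can be rendered from the logged kubeadm options via text/template; the template here is trimmed to the InitConfiguration header and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// tmpl renders only the InitConfiguration header for illustration.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the kubeadm options logged above.
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"BindPort":         8441,
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "functional-786978",
	})
}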
	I1211 00:11:34.989171   39129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:11:34.996103   39129 command_runner.go:130] > kubeadm
	I1211 00:11:34.996124   39129 command_runner.go:130] > kubectl
	I1211 00:11:34.996130   39129 command_runner.go:130] > kubelet
	I1211 00:11:34.996965   39129 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:11:34.997027   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:11:35.004524   39129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:11:35.022259   39129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:11:35.035877   39129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 00:11:35.049665   39129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:11:35.053270   39129 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1211 00:11:35.053410   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:35.173051   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:35.663593   39129 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:11:35.663611   39129 certs.go:195] generating shared ca certs ...
	I1211 00:11:35.663626   39129 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:35.663843   39129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:11:35.663918   39129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:11:35.664081   39129 certs.go:257] generating profile certs ...
	I1211 00:11:35.664282   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:11:35.664361   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:11:35.664489   39129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:11:35.664502   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 00:11:35.664555   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 00:11:35.664574   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 00:11:35.664591   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 00:11:35.664636   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 00:11:35.664653   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 00:11:35.664664   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 00:11:35.664675   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 00:11:35.664773   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:11:35.664811   39129 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:11:35.664825   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:11:35.664885   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:11:35.664944   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:11:35.664975   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:11:35.665087   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:35.665126   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 00:11:35.665138   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.665177   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.666144   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:11:35.692413   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:11:35.716263   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:11:35.735120   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:11:35.753386   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:11:35.771269   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:11:35.789331   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:11:35.806153   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:11:35.823663   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:11:35.840043   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:11:35.857281   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:11:35.874656   39129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:11:35.887595   39129 ssh_runner.go:195] Run: openssl version
	I1211 00:11:35.893373   39129 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1211 00:11:35.893766   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.901331   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:11:35.908770   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912293   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912332   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912381   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.953295   39129 command_runner.go:130] > 3ec20f2e
	I1211 00:11:35.953382   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:11:35.960497   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.967487   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:11:35.974778   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978822   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978856   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978928   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:36.019575   39129 command_runner.go:130] > b5213941
	I1211 00:11:36.020060   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:11:36.028538   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.036748   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:11:36.045277   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049492   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049553   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049672   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.092814   39129 command_runner.go:130] > 51391683
	I1211 00:11:36.093356   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
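Each certificate above goes through the same sequence: link the PEM into place, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs so OpenSSL consumers can find it. A Go sketch of that flow, shelling out to openssl for the hash exactly as the log does; this is not minikube's code and needs root to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the per-certificate sequence visible above.
func installCACert(pem string) error {
	name := filepath.Base(pem)
	if err := relink(pem, filepath.Join("/etc/ssl/certs", name)); err != nil {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	return relink(pem, filepath.Join("/etc/ssl/certs", hash+".0"))
}

// relink behaves like "ln -fs": replace any existing link.
func relink(target, link string) error {
	_ = os.Remove(link)
	return os.Symlink(target, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}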
	I1211 00:11:36.101223   39129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105165   39129 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105191   39129 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1211 00:11:36.105198   39129 command_runner.go:130] > Device: 259,1	Inode: 1312480     Links: 1
	I1211 00:11:36.105205   39129 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:36.105212   39129 command_runner.go:130] > Access: 2025-12-11 00:07:28.485872476 +0000
	I1211 00:11:36.105217   39129 command_runner.go:130] > Modify: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105222   39129 command_runner.go:130] > Change: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105228   39129 command_runner.go:130] >  Birth: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105288   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:11:36.146158   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.146663   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:11:36.187479   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.187576   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:11:36.228130   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.228568   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:11:36.269072   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.269532   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:11:36.310317   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.310832   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1211 00:11:36.353606   39129 command_runner.go:130] > Certificate will not expire
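The repeated "openssl x509 -noout -checkend 86400" calls above ask whether each certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, as a self-contained sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. the equivalent of -checkend 86400 for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}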
	I1211 00:11:36.354067   39129 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:36.354163   39129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:11:36.354246   39129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:11:36.382480   39129 cri.go:89] found id: ""
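The empty `found id: ""` above comes from the quiet crictl listing returning no output. A minimal Go sketch of the same query; the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers runs the same crictl query as the log line above
// and splits the --quiet output (one container ID per line).
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}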
	I1211 00:11:36.382557   39129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:11:36.389756   39129 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1211 00:11:36.389777   39129 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1211 00:11:36.389784   39129 command_runner.go:130] > /var/lib/minikube/etcd:
	I1211 00:11:36.390708   39129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:11:36.390737   39129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:11:36.390806   39129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:11:36.398342   39129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:11:36.398732   39129 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.398833   39129 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-786978" cluster setting kubeconfig missing "functional-786978" context setting]
	I1211 00:11:36.399137   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
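The "needs updating (will repair)" decision above boils down to the profile's cluster and context entries being absent from the kubeconfig. A sketch of that check using k8s.io/client-go/tools/clientcmd; minikube's own logic lives in its kubeconfig package and also verifies the endpoint:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair loads a kubeconfig and reports whether the named profile is
// missing its cluster or context entry.
func needsRepair(path, profile string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, hasCluster := cfg.Clusters[profile]
	_, hasContext := cfg.Contexts[profile]
	return !hasCluster || !hasContext, nil
}

func main() {
	repair, err := needsRepair("/home/jenkins/minikube-integration/22061-2739/kubeconfig", "functional-786978")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("needs updating:", repair)
}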
	I1211 00:11:36.399560   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.399714   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.400253   39129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 00:11:36.400273   39129 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 00:11:36.400281   39129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 00:11:36.400286   39129 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 00:11:36.400291   39129 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 00:11:36.400594   39129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:11:36.400697   39129 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1211 00:11:36.409983   39129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1211 00:11:36.410015   39129 kubeadm.go:602] duration metric: took 19.271635ms to restartPrimaryControlPlane
	I1211 00:11:36.410025   39129 kubeadm.go:403] duration metric: took 55.966406ms to StartCluster
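
The "does not require reconfiguration" decision above follows a diff of the live /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new: diff(1) exits 0 when the files match and 1 when they differ. A sketch of that check under those assumptions; the package and function names are illustrative:

// needs_reconfig.go: sketch of the diff-based check behind the
// "does not require reconfiguration" decision logged above.
package kubeadm

import (
	"errors"
	"os/exec"
)

func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // exit 0: files identical, keep the running control plane
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // exit 1: files differ, control plane must be reconfigured
	}
	return false, err // exit 2 or exec failure: surface as an error
}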
	I1211 00:11:36.410041   39129 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410105   39129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.410754   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410951   39129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:11:36.411375   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:36.411428   39129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 00:11:36.411496   39129 addons.go:70] Setting storage-provisioner=true in profile "functional-786978"
	I1211 00:11:36.411509   39129 addons.go:239] Setting addon storage-provisioner=true in "functional-786978"
	I1211 00:11:36.411539   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.412103   39129 addons.go:70] Setting default-storageclass=true in profile "functional-786978"
	I1211 00:11:36.412128   39129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-786978"
	I1211 00:11:36.412372   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.412555   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.416027   39129 out.go:179] * Verifying Kubernetes components...
	I1211 00:11:36.418962   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:36.445616   39129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 00:11:36.448584   39129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.448615   39129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 00:11:36.448687   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.455632   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.455806   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.456398   39129 addons.go:239] Setting addon default-storageclass=true in "functional-786978"
	I1211 00:11:36.456432   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.459345   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.488078   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.511255   39129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:36.511282   39129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 00:11:36.511350   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.540894   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
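
The cli_runner calls above resolve the host port Docker published for the container's SSH port (22/tcp) and then open an SSH client against 127.0.0.1 on that port. A sketch of the same lookup, assuming the docker CLI is on PATH; the hard-coded container name is for illustration only:

// ssh_port.go: sketch of resolving the host-mapped SSH port, mirroring the
// `docker container inspect -f ...` template in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "functional-786978").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh via 127.0.0.1:" + strings.TrimSpace(string(out)))
}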
	I1211 00:11:36.608214   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:36.665748   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.679982   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.404051   39129 node_ready.go:35] waiting up to 6m0s for node "functional-786978" to be "Ready" ...
	I1211 00:11:37.404239   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.404634   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404742   39129 retry.go:31] will retry after 310.125043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404824   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404858   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404893   39129 retry.go:31] will retry after 141.721995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404991   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:37.547464   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.613487   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.613562   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.613592   39129 retry.go:31] will retry after 561.758211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.715754   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:37.779510   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.779557   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.779585   39129 retry.go:31] will retry after 505.869102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
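
Both addon applies fail here with "connection refused" because the apiserver behind localhost:8441 is still coming up, so retry.go reschedules each apply with a growing, jittered delay (310ms, 141ms, 561ms, ... in the lines above and below). A generic sketch of that retry-with-backoff pattern; the constants, cap, and jitter policy are illustrative, not minikube's actual retry.go:

// retry.go sketch: capped exponential backoff with jitter, in the spirit of
// the "will retry after ..." lines in this log.
package retryx

import (
	"math/rand"
	"time"
)

// Retry runs fn up to attempts times, sleeping between failures with a
// doubling, jittered delay capped at maxDelay. Returns the last error.
func Retry(attempts int, base, maxDelay time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break // no sleep after the final failure
		}
		// Current delay plus up to 50% jitter, then double, capped at maxDelay.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2+1)))
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}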
	I1211 00:11:37.904810   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.904884   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.175539   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.243137   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.243185   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.243204   39129 retry.go:31] will retry after 361.539254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.286533   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:38.344606   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.348111   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.348157   39129 retry.go:31] will retry after 829.218438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.404431   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.404511   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.404881   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.605429   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.661283   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.664833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.664864   39129 retry.go:31] will retry after 800.266997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.905185   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.905301   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.905646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:39.178251   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:39.238429   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.238472   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.238493   39129 retry.go:31] will retry after 1.184749907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.405001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.405348   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:39.405424   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
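
In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-786978 roughly every 500ms and logs the warning above each time the apiserver refuses the connection. A hedged client-go sketch of such a wait loop; the package name, interval handling, and the choice to treat API errors as "not ready yet" are illustrative:

// node_ready.go sketch: poll a node's Ready condition, retrying through
// "connection refused" until the context expires, as the loop above does.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func WaitReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		}
		// API errors (e.g. connection refused while the apiserver restarts)
		// are swallowed and retried until ctx is done.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}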
	I1211 00:11:39.465581   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:39.526474   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.526525   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.526544   39129 retry.go:31] will retry after 1.807004704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.905105   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.905423   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.405603   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.423936   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:40.495739   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:40.495794   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.495811   39129 retry.go:31] will retry after 1.404783651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.334388   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:41.396786   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.396852   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.396891   39129 retry.go:31] will retry after 1.10995967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.405068   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.405184   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.405534   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:41.405602   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:41.901437   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:41.905007   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.905077   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.905313   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.984043   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.984104   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.984123   39129 retry.go:31] will retry after 1.551735429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.404784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:42.507069   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:42.562010   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:42.565655   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.565695   39129 retry.go:31] will retry after 1.834850552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.904273   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.904413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.904767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.404422   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.536095   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:43.596578   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:43.596618   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.596641   39129 retry.go:31] will retry after 3.759083682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.905026   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.905109   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.905424   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:43.905474   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:44.401015   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:44.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.404608   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:44.466004   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:44.470131   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.470162   39129 retry.go:31] will retry after 3.734519465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.904450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.904746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.404448   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.404610   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.405391   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.905314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.905389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.905730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:45.905817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:46.404489   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.404597   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.404850   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:46.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.904888   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.905184   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.356864   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:47.404412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.420245   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:47.420295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.420315   39129 retry.go:31] will retry after 2.851566945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.904846   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.904912   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.905167   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:48.205865   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:48.269575   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:48.269614   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.269633   39129 retry.go:31] will retry after 3.250947796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.404858   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.404932   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.405259   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:48.405314   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:48.905121   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.905582   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.404258   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.404342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.904741   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.272194   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:50.327238   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:50.331229   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.331261   39129 retry.go:31] will retry after 4.377849152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.404603   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.404681   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.404972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.904412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:50.904763   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:51.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.404469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:51.521211   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:51.575865   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:51.579753   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.579788   39129 retry.go:31] will retry after 10.380601314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[6 identical readiness polls elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 every ~500ms from 00:11:51.905 to 00:11:54.404, each answered "connection refused"; node_ready retry warning logged at 00:11:53.404]
	I1211 00:11:54.709241   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:54.767641   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:54.771055   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.771086   39129 retry.go:31] will retry after 5.957769887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
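The "will retry after …" lines come from minikube's retry helper (retry.go), which re-runs the failed apply after a randomized, growing delay; that is why the waits logged for these two independently retrying manifests (10.4s and 6.0s here, later 11.5s, 14.7s, 23.2s, 30.8s, 16.2s) grow only roughly and not monotonically. A minimal jittered exponential-backoff sketch in the same spirit (retryWithBackoff and its base/max/jitter parameters are assumptions, not minikube's exact values):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // doubling the delay ceiling each round and sleeping a random duration
    // below it ("full jitter"), which yields irregular waits like those above.
    func retryWithBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
        delay := base // must be > 0 for rand.Int63n below
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        return err
    }

    func main() {
        // Millisecond-scale delays so the demo finishes quickly.
        calls := 0
        err := retryWithBackoff(5, 8*time.Millisecond, 40*time.Millisecond, func() error {
            calls++
            return fmt.Errorf("connect: connection refused (attempt %d)", calls)
        })
        fmt.Println(err)
    }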
	[12 identical readiness polls elided, 00:11:54.904 to 00:12:00.404; node_ready retry warnings logged at 00:11:55.904, 00:11:57.905 and 00:12:00.405]
	I1211 00:12:00.729113   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:00.791242   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:00.794799   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.794830   39129 retry.go:31] will retry after 11.484844112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[3 identical readiness polls elided, 00:12:00.905 to 00:12:01.904]
	I1211 00:12:01.961328   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:02.020749   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:02.024939   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.024971   39129 retry.go:31] will retry after 14.651232328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[20 identical readiness polls elided, 00:12:02.404 to 00:12:11.904; node_ready retry warnings logged roughly every 2s, at 00:12:02.904, 00:12:04.904, 00:12:06.905, 00:12:09.404 and 00:12:11.904]
	I1211 00:12:12.280537   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:12.342793   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:12.342833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.342853   39129 retry.go:31] will retry after 23.205348466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[9 identical readiness polls elided, 00:12:12.405 to 00:12:16.404; node_ready retry warnings logged at 00:12:13.905 and 00:12:16.405]
	I1211 00:12:16.676815   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:16.732715   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:16.736183   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.736213   39129 retry.go:31] will retry after 30.816141509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[38 identical readiness polls elided, 00:12:16.904 to 00:12:35.404; node_ready retry warnings ("connection refused", will retry) logged at 00:12:18.904, 00:12:21.404, 00:12:23.405, 00:12:25.904, 00:12:28.404, 00:12:30.404, 00:12:32.904 and 00:12:34.904]
	I1211 00:12:35.549321   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:35.607106   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:35.610743   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:35.610780   39129 retry.go:31] will retry after 16.241459848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[10 identical readiness polls elided, 00:12:35.905 to 00:12:40.404; node_ready retry warnings logged at 00:12:36.905 and 00:12:39.404]
	I1211 00:12:40.904945   39129 type.go:168] "Request Body" body=""
	I1211 00:12:40.905026   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:40.905372   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:41.405159   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.405236   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.405596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:41.405654   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:41.904342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.904410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.404375   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.404773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.904495   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.904570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.404321   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.404638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:43.904791   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
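
Interleaved with those retries, node_ready.go polls GET /api/v1/nodes/functional-786978 roughly every 500ms and logs a "will retry" warning every few cycles while the connection keeps being refused. A rough stdlib-only illustration of that polling loop; the URL and interval come from the log, but the helper name is invented and real minikube goes through an authenticated client-go transport rather than a bare HTTP client:

// Illustrative poll loop for a node's Ready condition; this stdlib
// sketch only mirrors the logged behaviour, it is not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitNodeReady(url string, interval, timeout time.Duration) error {
	// Skip cert verification as a stand-in for the authenticated
	// transport minikube actually uses.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" lands here and we simply retry,
			// matching the repeated warnings in the log.
			fmt.Printf("error getting node (will retry): %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // a real check would inspect status.conditions
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node not ready within %s", timeout)
}

func main() {
	_ = waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-786978",
		500*time.Millisecond, 5*time.Second)
}
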
	I1211 00:12:44.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:44.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.904643   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.405043   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.405120   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.405458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.905241   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.905313   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.905665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:45.905721   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:46.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.404365   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.404665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:46.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.904443   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.904803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.404531   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.404614   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.404913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.553376   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:47.607763   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:47.611288   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.611317   39129 retry.go:31] will retry after 35.21019071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.904878   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.904951   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.905249   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:48.405085   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.405161   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.405471   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:48.405525   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:48.905284   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.905364   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.905681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.405295   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.405377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.405636   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.904752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.404362   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.904691   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:50.904742   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:51.404407   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.404485   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.404838   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.852477   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:51.904839   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.904910   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.905174   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.907207   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910785   39129 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
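
kubectl's own error text points at an escape hatch, --validate=false, which skips the OpenAPI download step that is failing here. It would not rescue this run, since the apply itself still needs a reachable apiserver, but for completeness a sketch of the shell-out as it appears in the log with that flag added; this mirrors the logged command line and is not minikube's ssh_runner:

// Sketch of the kubectl invocation from the log, with the
// --validate=false workaround the error message suggests. Skipping
// validation only removes the OpenAPI fetch; the apply still needs a
// live apiserver.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig", // env assignment via sudo, as logged
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("apply failed: %v\n", err)
	}
}
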
	I1211 00:12:52.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.404612   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:52.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.904765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:52.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:53.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.404596   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.404945   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:53.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.904430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.404439   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.904380   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.904458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:55.404272   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.404347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:55.404733   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:55.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.404550   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.404631   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.404976   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.904790   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.904860   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.905139   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:57.404944   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.405013   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.405350   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:57.405406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:57.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.905273   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.905640   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.405189   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.405260   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.405511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.905275   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.905353   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.905724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.404325   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.404712   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.904425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:59.904732   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:00.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.404486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:00.904965   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.905043   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.905388   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.405097   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.405176   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.405439   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.904725   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.904806   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.905152   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:01.905207   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:02.404978   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.405084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.405396   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:02.905191   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.905264   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.905532   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.405309   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.405405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.405763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.904795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:04.404467   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.404555   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:04.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:04.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.404438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.404775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.904484   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.904554   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.904870   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:06.404545   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.404613   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.404937   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:06.404991   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:06.904732   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.904814   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.905130   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.404806   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.404877   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.405129   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.904906   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.904976   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.905335   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:08.405133   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.405212   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.405523   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:08.405575   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:08.905290   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.905357   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.404766   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.904501   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.904588   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.904943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.404293   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.404651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:10.904861   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:11.404508   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.404642   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.404988   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:11.904763   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.904841   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.905096   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.404345   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.904307   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.904388   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.904759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:13.404447   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:13.404835   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:13.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.904421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.904745   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.404439   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.404842   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.904313   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.904637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.404367   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.404787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.904488   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.904581   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:15.904954   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:16.404512   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.404576   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.404846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:16.904793   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.904870   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.905202   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.404863   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.404963   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.405289   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.905011   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.905075   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.905318   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:17.905356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:18.405098   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.405169   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.405467   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:18.905238   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.905310   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.905637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.404323   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.404716   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.904449   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.904524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.904900   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:20.404601   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.405009   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:20.405059   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:20.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.904383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.904630   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.404435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.904577   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.904658   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.905033   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.404711   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.404786   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.405042   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.821681   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:13:22.876683   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880396   39129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1211 00:13:22.883693   39129 out.go:179] * Enabled addons: 
	I1211 00:13:22.887530   39129 addons.go:530] duration metric: took 1m46.476102717s for enable addons: enabled=[]
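
Working backwards from the give-up line: 00:13:22.887 minus the reported 1m46.476s puts the start of addon enabling at roughly 00:11:36.4, a window that the connection-refused errors throughout this excerpt suggest fell entirely inside the apiserver outage, which is why both addons end up abandoned and the summary reads enabled=[].
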
	I1211 00:13:22.904608   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.904678   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.904957   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:22.905000   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:23.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:13:23.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:23.404775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:23.904320   39129 type.go:168] "Request Body" body=""
	I1211 00:13:23.904392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:23.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:24.404395   39129 type.go:168] "Request Body" body=""
	I1211 00:13:24.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:24.404803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:24.904476   39129 type.go:168] "Request Body" body=""
	I1211 00:13:24.904551   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:24.904854   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:25.404225   39129 type.go:168] "Request Body" body=""
	I1211 00:13:25.404302   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:25.404557   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:25.404605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 poll and refused response repeat every ~500ms, with "node_ready" retry warnings every few seconds, from 00:13:25 through 00:14:24 ...]
	W1211 00:14:24.904887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:25.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.404461   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.404736   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:25.904509   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.904583   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.404731   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.404818   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.904985   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.905061   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.905327   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:26.905366   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:27.405132   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.405207   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:27.905312   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.905383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.905699   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.404639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:29.404330   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:29.404817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:29.904445   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.904517   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.904836   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.404772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:31.404452   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.404538   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.404813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:31.404867   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:31.904825   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.904902   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.905256   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.405133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.405434   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.905146   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.905216   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.905460   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:33.405223   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.405303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.405614   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:33.405669   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:33.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.904380   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.404393   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.904353   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.404418   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.904262   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.904332   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:35.904703   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:36.404548   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.404942   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:36.904920   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.905001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.405180   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.405250   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.405549   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:37.904735   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:38.404400   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:38.904471   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.904540   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.904868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.404739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:40.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.404655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:40.404705   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:40.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.404427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.404749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.904650   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.904717   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.904964   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:42.404693   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.404775   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.405115   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:42.405176   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:42.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.905044   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.905384   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.405173   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.405244   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.405506   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.904350   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.904566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:44.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:45.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.404848   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:45.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.404605   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.404878   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.905004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.905351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:46.905406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:47.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.405257   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.405597   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:47.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.904346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.904600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.404314   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.904530   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.904960   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:49.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:49.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:49.904389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.904823   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.404429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.404707   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.904397   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.904467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:51.904876   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:52.404539   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.404611   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.404868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:52.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.904488   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.404507   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.404909   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.904751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:54.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:54.404785   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:54.905091   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.905164   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.905461   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.405287   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.405536   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.905310   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.905400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.905792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:56.404664   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.404738   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.405079   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:56.405134   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:56.904863   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.904929   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.905177   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.404950   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.405032   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.405383   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.905061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.905135   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.905490   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:58.405233   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.405306   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.405559   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:58.405605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:58.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.904345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.404487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.404786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.904269   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.904338   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.904596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.404353   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.904439   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.904522   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.904908   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:00.904971   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:01.404441   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:01.904840   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.904916   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.905261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.405074   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.405158   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.405505   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.905255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.905626   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:02.905685   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:03.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:03.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.904501   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.904287   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.904363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.904668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:05.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:05.404809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:05.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.904390   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.404538   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.404621   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.404968   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.905084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.905399   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:07.405134   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.405202   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.405455   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:07.405496   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:07.905236   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.905316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.905668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.404259   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.404335   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.404669   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.904348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.904675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.404767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.904456   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.904528   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.904872   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:09.904926   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:10.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.404420   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:10.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.904438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.404399   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.904304   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.904386   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.904651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:12.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.404820   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:12.404875   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:12.904549   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.904630   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.904969   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.405324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.405622   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.904354   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.404360   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.404751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:14.904865   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:15.404372   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.404803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:15.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.404554   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.904714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.905117   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:16.905186   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:17.404926   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.404997   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.405333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:17.905100   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.905177   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.905446   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.405222   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.405312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.405665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.904275   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.904355   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:19.404415   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.404483   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.404828   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:19.404886   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:19.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.904762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.404498   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.904600   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.904693   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.904972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:21.404647   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.405062   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:21.405116   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:21.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.905031   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.905358   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.405138   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.405205   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.405465   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.905211   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.905339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.905644   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.404356   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.404765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.904406   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:23.904809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:24.404449   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.404844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:24.904567   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.904647   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.904980   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.404518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.404591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.404896   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:25.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:26.404543   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.404952   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:26.904722   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.905041   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.404714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.404795   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.904869   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.904942   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.905254   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:27.905309   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:28.405022   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.405096   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.405402   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:28.905177   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.905254   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.905568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.404313   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.404393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.904395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.904647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:30.404357   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:30.404784   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:30.904438   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.904510   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.904846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.404410   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.404482   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.404742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.904715   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.905138   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:32.404902   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.404973   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.405298   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:32.405356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:32.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.905100   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.905353   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.405141   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.405225   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.405565   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.904778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.404473   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.404543   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.404861   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.904347   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.904417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:34.904828   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:35.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.404888   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:35.904565   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.904641   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.904947   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.404649   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.404729   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.405029   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.904816   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.904901   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.905206   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:36.905255   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:37.404887   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.404952   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.405287   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:37.904915   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.904985   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.905278   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.405156   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.405464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.905056   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.905124   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.905378   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:38.905418   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:39.405219   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.405647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:39.904336   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.904409   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.404291   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.404607   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.904308   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:41.404428   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.404503   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:41.404925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:41.904306   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.904378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.904685   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.404364   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:43.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.405484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:43.405524   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:43.905240   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.905312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.905656   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.404466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.904706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.404373   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.404448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.404764   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.904503   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.904579   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.904930   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:45.905003   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:46.404675   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.404755   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.405031   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:46.904971   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.905045   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.905387   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.405184   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.405266   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.405600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.904290   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.904358   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.904588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:48.404282   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.404778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:48.404837   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:48.904518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.904616   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.904965   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.404435   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.404505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.404851   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.904395   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.904468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.904810   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.904414   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.904489   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.904743   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:50.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:51.404343   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.404705   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:51.904585   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.904663   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.904998   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.404551   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.404875   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:53.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.404762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:53.404819   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:53.904321   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.904670   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.404457   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.404310   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.904444   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.904531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.904876   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:55.904925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:56.404575   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.404977   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:56.904836   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.904913   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.905188   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:57.404951   39129 type.go:168] "Request Body" body=""
	I1211 00:15:57.405027   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:57.405355   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:57.905048   39129 type.go:168] "Request Body" body=""
	I1211 00:15:57.905133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:57.905458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:57.905511   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:58.405161   39129 type.go:168] "Request Body" body=""
	I1211 00:15:58.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:58.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:58.905188   39129 type.go:168] "Request Body" body=""
	I1211 00:15:58.905266   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:58.905560   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:59.404270   39129 type.go:168] "Request Body" body=""
	I1211 00:15:59.404349   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:59.404684   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:59.904284   39129 type.go:168] "Request Body" body=""
	I1211 00:15:59.904359   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:59.904655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:00.404418   39129 type.go:168] "Request Body" body=""
	I1211 00:16:00.404515   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:00.404905   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:00.404957   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:00.904950   39129 type.go:168] "Request Body" body=""
	I1211 00:16:00.905035   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:00.905354   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:01.405093   39129 type.go:168] "Request Body" body=""
	I1211 00:16:01.405167   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:01.405430   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:01.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:16:01.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:01.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:02.404498   39129 type.go:168] "Request Body" body=""
	I1211 00:16:02.404580   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:02.404922   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:02.404978   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:02.904523   39129 type.go:168] "Request Body" body=""
	I1211 00:16:02.904595   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:02.904914   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:03.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:16:03.404450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:03.404782   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:03.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:16:03.904572   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:03.904928   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:04.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:16:04.404607   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:04.404926   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:04.904614   39129 type.go:168] "Request Body" body=""
	I1211 00:16:04.904693   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:04.905032   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:04.905090   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:05.404755   39129 type.go:168] "Request Body" body=""
	I1211 00:16:05.404828   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:05.405160   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:05.904838   39129 type.go:168] "Request Body" body=""
	I1211 00:16:05.904911   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:05.905161   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:06.405082   39129 type.go:168] "Request Body" body=""
	I1211 00:16:06.405156   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:06.405465   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:06.904402   39129 type.go:168] "Request Body" body=""
	I1211 00:16:06.904483   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:06.904844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:07.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:16:07.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:07.404812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:07.404870   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:07.904508   39129 type.go:168] "Request Body" body=""
	I1211 00:16:07.904584   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:07.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:08.404619   39129 type.go:168] "Request Body" body=""
	I1211 00:16:08.404701   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:08.405096   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:08.904878   39129 type.go:168] "Request Body" body=""
	I1211 00:16:08.904962   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:08.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:09.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:16:09.405142   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:09.405475   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:09.405528   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:09.905172   39129 type.go:168] "Request Body" body=""
	I1211 00:16:09.905256   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:09.905577   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:10.404232   39129 type.go:168] "Request Body" body=""
	I1211 00:16:10.404303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:10.404612   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:10.904313   39129 type.go:168] "Request Body" body=""
	I1211 00:16:10.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:10.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:11.404435   39129 type.go:168] "Request Body" body=""
	I1211 00:16:11.404506   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:11.404826   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:11.904829   39129 type.go:168] "Request Body" body=""
	I1211 00:16:11.904896   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:11.905193   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:11.905238   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 poll repeats on a ~500ms cadence from 00:16:12.405 through 00:17:12.405, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and node_ready.go:55 logging the same "will retry" warning roughly every 2s ...]
	I1211 00:17:12.905088   39129 type.go:168] "Request Body" body=""
	I1211 00:17:12.905195   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:12.905511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.405329   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.405579   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:13.405619   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:13.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.904761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.404335   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.404408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.404714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.904317   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.904641   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.404300   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.404377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.404706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.904411   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.904487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.904783   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:15.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:16.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.404582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:16.904777   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.904859   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.905166   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.404976   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.405047   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.405346   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.905085   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.905148   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.905484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:17.905576   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:18.405320   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.405400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.405752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:18.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.904452   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.404450   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.404524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.404839   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.904351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.904429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:20.404466   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.404542   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:20.404929   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:20.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.404759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.904459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.404295   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.404368   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.404715   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.904484   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:22.904863   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:23.404333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.404410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.404731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:23.904406   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.904749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.404371   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.904474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.904844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:24.904898   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:25.404303   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.404370   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:25.904598   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.904676   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.905012   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.404650   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.405090   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.904820   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.904890   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.905169   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:26.905212   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:27.404936   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.405356   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:27.905133   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.905529   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.405274   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.405341   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.405686   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.904432   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:29.404458   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.404541   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:29.404943   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:29.904284   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.904367   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.904684   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.404462   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.904507   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.904582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.904891   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.404374   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.904703   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.904772   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.908235   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1211 00:17:31.908301   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:32.405044   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.405123   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.405443   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:32.905095   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.905166   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.905421   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.405170   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.405251   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.405557   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.905635   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:34.404308   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.404675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:34.404722   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:34.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.904444   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.904434   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.904506   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.904785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:36.404582   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.404662   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.404987   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:36.405043   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:36.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.904799   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:37.404341   39129 type.go:168] "Request Body" body=""
	I1211 00:17:37.404399   39129 node_ready.go:38] duration metric: took 6m0.000266247s for node "functional-786978" to be "Ready" ...
	I1211 00:17:37.407624   39129 out.go:203] 
	W1211 00:17:37.410619   39129 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1211 00:17:37.410819   39129 out.go:285] * 
	W1211 00:17:37.413036   39129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:17:37.415867   39129 out.go:203] 
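
The six-minute stretch above (one GET to /api/v1/nodes/functional-786978 roughly every 500ms, each failing with "connect: connection refused", until "WaitNodeCondition: context deadline exceeded") is the standard Go poll-until-deadline pattern. A minimal sketch of that pattern, using a plain net/http client rather than minikube's actual client-go machinery (URL, interval, and deadline taken from the log; TLS verification skipped only to keep the sketch self-contained):

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// 6m0s deadline, matching "wait 6m0s for node" in the failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-786978"

		ticker := time.NewTicker(500 * time.Millisecond) // the ~0.5s cadence visible in the timestamps
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("WaitNodeCondition: context deadline exceeded")
				return
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // e.g. "connect: connection refused" while nothing listens on 8441
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return // node fetched; real code would also inspect the Ready condition
				}
			}
		}
	}

In the run above the loop never sees a successful response, so it exits only when the deadline fires, producing the GUEST_START error.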
	
	
	==> CRI-O <==
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.683978999Z" level=info msg="Using the internal default seccomp profile"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.683987065Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.68399408Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684000078Z" level=info msg="RDT not available in the host system"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684012575Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684721181Z" level=info msg="Conmon does support the --sync option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684744254Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684759869Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685472873Z" level=info msg="Conmon does support the --sync option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685489841Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685616727Z" level=info msg="Updated default CNI network name to "
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686203986Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686552619Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686608021Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.72384117Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.723993648Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724046014Z" level=info msg="Create NRI interface"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724146438Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724162447Z" level=info msg="runtime interface created"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724176445Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724182738Z" level=info msg="runtime interface starting up..."
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.72418872Z" level=info msg="starting plugins..."
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724200634Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724260606Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:11:34 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:17:39.408653    8612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:39.409072    8612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:39.410680    8612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:39.411250    8612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:39.412817    8612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
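
Every probe in this run fails with "connect: connection refused", i.e. nothing is listening on 8441 at all, which is a different failure mode from an apiserver that is up but unhealthy. A hedged Go sketch that separates the two cases (/readyz is a standard kube-apiserver health endpoint; the insecure TLS config is for illustration only, not how kubectl authenticates):

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"syscall"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		resp, err := client.Get("https://localhost:8441/readyz")
		switch {
		case errors.Is(err, syscall.ECONNREFUSED):
			// The case in this report: no kube-apiserver process bound to 8441.
			fmt.Println("nothing listening on 8441: apiserver is not running")
		case err != nil:
			fmt.Println("listening but not reachable/healthy:", err)
		default:
			defer resp.Body.Close()
			fmt.Println("apiserver answered /readyz:", resp.Status)
		}
	}

errors.Is unwraps through the *url.Error and *net.OpError chain that net/http returns, so the ECONNREFUSED check works without string matching.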
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:17:39 up 29 min,  0 user,  load average: 0.25, 0.27, 0.46
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:17:36 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:37 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 11 00:17:37 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:37 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:37 functional-786978 kubelet[8502]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:37 functional-786978 kubelet[8502]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:37 functional-786978 kubelet[8502]: E1211 00:17:37.497659    8502 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:37 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:37 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 11 00:17:38 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:38 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:38 functional-786978 kubelet[8507]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:38 functional-786978 kubelet[8507]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:38 functional-786978 kubelet[8507]: E1211 00:17:38.217467    8507 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 11 00:17:38 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:38 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:38 functional-786978 kubelet[8528]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:38 functional-786978 kubelet[8528]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:38 functional-786978 kubelet[8528]: E1211 00:17:38.961703    8528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:38 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
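The kubelet section above pins down the root cause of the SoftStart failure: this v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restarts it in a crash loop (restart counter 810, 811, 812, ...), so the apiserver never comes back and every probe earlier in the log gets connection refused. A host is on cgroup v2 exactly when /sys/fs/cgroup itself is mounted as cgroup2fs; a minimal Go sketch of that check, using golang.org/x/sys/unix (not minikube's or kubelet's actual detection code):

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC { // 0x63677270, cgroup2fs
			fmt.Println("cgroup v2: this kubelet's validation would pass")
		} else {
			fmt.Println("cgroup v1: matches the kubelet validation failure above")
		}
	}

On this 5.15.0-1084-aws host the check would report cgroup v1, consistent with the kernel section above and the kubelet's refusal to start.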
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (414.102537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.10s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-786978 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-786978 get po -A: exit status 1 (62.951979ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-786978 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-786978 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-786978 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
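For reference, the "Ports" map in the inspect output above is how the test host reaches the kic container: each container port (22 for SSH, 8441 for the API server, and so on) is bound to an ephemeral port on 127.0.0.1. The mapped SSH port can be read back with the same Go-template inspect call that appears later in this log; a minimal sketch, assuming the container is still named functional-786978:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-786978

Against the state captured above this prints 32783, the HostPort recorded for 22/tcp.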
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (312.782861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
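The "may be ok" note reflects how minikube status reports state: component statuses (host, kubelet, apiserver) are folded into the exit code, so a non-zero exit can accompany a Running host, as it does here. A minimal sketch of the same probe, assuming the binary path used throughout this report (the trailing echo is illustrative, not part of the harness):

	out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978 || echo "status exited non-zero ($?); host may still be Running"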
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 logs -n 25: (1.075873726s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh -- ls -la /mount-9p                                                                                                         │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh sudo umount -f /mount-9p                                                                                                    │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount1 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount3 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount1                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount2 --alsologtostderr -v=1                                │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ ssh            │ functional-976823 ssh findmnt -T /mount2                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ ssh            │ functional-976823 ssh findmnt -T /mount3                                                                                                          │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │ 11 Dec 25 00:02 UTC │
	│ mount          │ -p functional-976823 --kill=true                                                                                                                  │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:02 UTC │                     │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ update-context │ functional-976823 update-context --alsologtostderr -v=2                                                                                           │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format short --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh            │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image          │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image          │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete         │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start          │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start          │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
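The last two Audit rows are the start attempts for the profile under test; both END TIME cells are empty, i.e. neither start had recorded completion. Reconstructed from the COMMAND and ARGS columns, the original invocation was:

	out/minikube-linux-arm64 start -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0

followed by a retry with --alsologtostderr -v=8, whose output is the "Last Start" log below.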
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:11:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:11:31.563230   39129 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:11:31.563658   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563678   39129 out.go:374] Setting ErrFile to fd 2...
	I1211 00:11:31.563685   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563986   39129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:11:31.564407   39129 out.go:368] Setting JSON to false
	I1211 00:11:31.565211   39129 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1378,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:11:31.565283   39129 start.go:143] virtualization:  
	I1211 00:11:31.568710   39129 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:11:31.572525   39129 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:11:31.572647   39129 notify.go:221] Checking for updates...
	I1211 00:11:31.578309   39129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:11:31.581264   39129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:31.584071   39129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:11:31.586801   39129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:11:31.589632   39129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:11:31.593067   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:31.593203   39129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:11:31.624525   39129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:11:31.624640   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.680227   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.670392474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.680335   39129 docker.go:319] overlay module found
	I1211 00:11:31.683507   39129 out.go:179] * Using the docker driver based on existing profile
	I1211 00:11:31.686334   39129 start.go:309] selected driver: docker
	I1211 00:11:31.686351   39129 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.686457   39129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:11:31.686564   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.744265   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.73545255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.744665   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:31.744728   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:31.744781   39129 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.747938   39129 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:11:31.750895   39129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:11:31.753857   39129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:11:31.756592   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:31.756636   39129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:11:31.756650   39129 cache.go:65] Caching tarball of preloaded images
	I1211 00:11:31.756687   39129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:11:31.756736   39129 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:11:31.756746   39129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:11:31.756847   39129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:11:31.775263   39129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:11:31.775283   39129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:11:31.775304   39129 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:11:31.775335   39129 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:11:31.775391   39129 start.go:364] duration metric: took 34.412µs to acquireMachinesLock for "functional-786978"
	I1211 00:11:31.775414   39129 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:11:31.775420   39129 fix.go:54] fixHost starting: 
	I1211 00:11:31.775679   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:31.791888   39129 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:11:31.791920   39129 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:11:31.795111   39129 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:11:31.795143   39129 machine.go:94] provisionDockerMachine start ...
	I1211 00:11:31.795229   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.811419   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.811754   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.811770   39129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:11:31.962366   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:31.962392   39129 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:11:31.962456   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.979928   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.980236   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.980251   39129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:11:32.139976   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:32.140054   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.158886   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.159253   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.159279   39129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:11:32.307553   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 00:11:32.307588   39129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:11:32.307609   39129 ubuntu.go:190] setting up certificates
	I1211 00:11:32.307618   39129 provision.go:84] configureAuth start
	I1211 00:11:32.307677   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:32.326881   39129 provision.go:143] copyHostCerts
	I1211 00:11:32.326928   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.326981   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:11:32.326990   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.327094   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:11:32.327189   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327219   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:11:32.327229   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327259   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:11:32.327306   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327328   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:11:32.327337   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327369   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:11:32.327438   39129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:11:32.651770   39129 provision.go:177] copyRemoteCerts
	I1211 00:11:32.651883   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:11:32.651966   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.672496   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:32.786699   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 00:11:32.786771   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:11:32.804288   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 00:11:32.804348   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:11:32.822111   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 00:11:32.822172   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 00:11:32.839310   39129 provision.go:87] duration metric: took 531.679958ms to configureAuth
	I1211 00:11:32.839337   39129 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:11:32.839540   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:32.839656   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.857209   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.857554   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.857577   39129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:11:33.187304   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:11:33.187369   39129 machine.go:97] duration metric: took 1.392217167s to provisionDockerMachine
	I1211 00:11:33.187397   39129 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:11:33.187428   39129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:11:33.187507   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:11:33.187571   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.206116   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.310766   39129 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:11:33.313950   39129 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1211 00:11:33.313971   39129 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1211 00:11:33.313977   39129 command_runner.go:130] > VERSION_ID="12"
	I1211 00:11:33.313982   39129 command_runner.go:130] > VERSION="12 (bookworm)"
	I1211 00:11:33.313987   39129 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1211 00:11:33.313990   39129 command_runner.go:130] > ID=debian
	I1211 00:11:33.313995   39129 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1211 00:11:33.314000   39129 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1211 00:11:33.314006   39129 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1211 00:11:33.314074   39129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:11:33.314099   39129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:11:33.314110   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:11:33.314165   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:11:33.314254   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:11:33.314265   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 00:11:33.314342   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:11:33.314349   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> /etc/test/nested/copy/4875/hosts
	I1211 00:11:33.314395   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:11:33.321833   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:33.338845   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:11:33.355788   39129 start.go:296] duration metric: took 168.358579ms for postStartSetup
	I1211 00:11:33.355933   39129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:11:33.355981   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.374136   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.483570   39129 command_runner.go:130] > 14%
	I1211 00:11:33.484133   39129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:11:33.488331   39129 command_runner.go:130] > 168G
	I1211 00:11:33.488874   39129 fix.go:56] duration metric: took 1.713448769s for fixHost
	I1211 00:11:33.488896   39129 start.go:83] releasing machines lock for "functional-786978", held for 1.713491657s
	I1211 00:11:33.488966   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:33.505970   39129 ssh_runner.go:195] Run: cat /version.json
	I1211 00:11:33.506004   39129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:11:33.506020   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.506067   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.524523   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.532688   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.712031   39129 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1211 00:11:33.714840   39129 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1211 00:11:33.715004   39129 ssh_runner.go:195] Run: systemctl --version
	I1211 00:11:33.720988   39129 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1211 00:11:33.721023   39129 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1211 00:11:33.721418   39129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:11:33.758142   39129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 00:11:33.762640   39129 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1211 00:11:33.762695   39129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:11:33.762759   39129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:11:33.770580   39129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:11:33.770605   39129 start.go:496] detecting cgroup driver to use...
	I1211 00:11:33.770636   39129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:11:33.770683   39129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:11:33.785751   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:11:33.798781   39129 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:11:33.798859   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:11:33.814594   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:11:33.828060   39129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:11:33.939426   39129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:11:34.063996   39129 docker.go:234] disabling docker service ...
	I1211 00:11:34.064079   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:11:34.088847   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:11:34.106427   39129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:11:34.233444   39129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:11:34.359250   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:11:34.371772   39129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:11:34.384768   39129 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1211 00:11:34.385910   39129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:11:34.386015   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.395329   39129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:11:34.395408   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.404378   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.412986   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.421585   39129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:11:34.429722   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.438361   39129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.447060   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.456153   39129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:11:34.462793   39129 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1211 00:11:34.463922   39129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:11:34.471096   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:34.576052   39129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 00:11:34.729272   39129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:11:34.729346   39129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:11:34.732930   39129 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1211 00:11:34.732954   39129 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1211 00:11:34.732962   39129 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1211 00:11:34.732969   39129 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:34.732973   39129 command_runner.go:130] > Access: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732985   39129 command_runner.go:130] > Modify: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732992   39129 command_runner.go:130] > Change: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732995   39129 command_runner.go:130] >  Birth: -
	I1211 00:11:34.733171   39129 start.go:564] Will wait 60s for crictl version
	I1211 00:11:34.733232   39129 ssh_runner.go:195] Run: which crictl
	I1211 00:11:34.736601   39129 command_runner.go:130] > /usr/local/bin/crictl
	I1211 00:11:34.736687   39129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:11:34.757793   39129 command_runner.go:130] > Version:  0.1.0
	I1211 00:11:34.757906   39129 command_runner.go:130] > RuntimeName:  cri-o
	I1211 00:11:34.757921   39129 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1211 00:11:34.757928   39129 command_runner.go:130] > RuntimeApiVersion:  v1
	I1211 00:11:34.760151   39129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:11:34.760230   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.787961   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.787986   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.787993   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.787998   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.788005   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.788009   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.788013   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.788019   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.788024   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.788028   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.788035   39129 command_runner.go:130] >      static
	I1211 00:11:34.788039   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.788043   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.788051   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.788055   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.788058   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.788069   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.788074   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.788080   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.788088   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.789644   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.815359   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.815385   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.815392   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.815397   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.815402   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.815425   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.815432   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.815439   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.815448   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.815452   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.815456   39129 command_runner.go:130] >      static
	I1211 00:11:34.815460   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.815468   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.815473   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.815480   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.815484   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.815491   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.815496   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.815505   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.815512   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.822208   39129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:11:34.825193   39129 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:11:34.839960   39129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:11:34.843868   39129 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1211 00:11:34.843970   39129 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:11:34.844072   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:34.844127   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.876890   39129 command_runner.go:130] > {
	I1211 00:11:34.876911   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.876915   39129 command_runner.go:130] >     {
	I1211 00:11:34.876923   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.876928   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.876934   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.876937   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876941   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.876951   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.876963   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.876967   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876971   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.876979   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.876984   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.876987   39129 command_runner.go:130] >     },
	I1211 00:11:34.876991   39129 command_runner.go:130] >     {
	I1211 00:11:34.876997   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.877005   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877011   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.877014   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877018   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877026   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.877038   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.877042   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877046   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.877053   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877060   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877067   39129 command_runner.go:130] >     },
	I1211 00:11:34.877070   39129 command_runner.go:130] >     {
	I1211 00:11:34.877077   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.877089   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877094   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.877098   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877113   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877124   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.877132   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.877139   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877143   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.877147   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.877151   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877154   39129 command_runner.go:130] >     },
	I1211 00:11:34.877158   39129 command_runner.go:130] >     {
	I1211 00:11:34.877165   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.877171   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877176   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.877180   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877186   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877194   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.877204   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.877211   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877216   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.877219   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877224   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877234   39129 command_runner.go:130] >       },
	I1211 00:11:34.877242   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877253   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877257   39129 command_runner.go:130] >     },
	I1211 00:11:34.877260   39129 command_runner.go:130] >     {
	I1211 00:11:34.877267   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.877271   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877280   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.877287   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877291   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877299   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.877309   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.877317   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877326   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.877334   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877343   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877347   39129 command_runner.go:130] >       },
	I1211 00:11:34.877351   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877359   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877363   39129 command_runner.go:130] >     },
	I1211 00:11:34.877367   39129 command_runner.go:130] >     {
	I1211 00:11:34.877374   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.877381   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877387   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.877390   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877394   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877411   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.877420   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.877426   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877430   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.877434   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877438   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877441   39129 command_runner.go:130] >       },
	I1211 00:11:34.877445   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877450   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877455   39129 command_runner.go:130] >     },
	I1211 00:11:34.877459   39129 command_runner.go:130] >     {
	I1211 00:11:34.877473   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.877476   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.877490   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877494   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877502   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.877512   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.877516   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877520   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.877527   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877534   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877538   39129 command_runner.go:130] >     },
	I1211 00:11:34.877550   39129 command_runner.go:130] >     {
	I1211 00:11:34.877556   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.877560   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877565   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.877571   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877575   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877582   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.877602   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.877606   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877614   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.877618   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877630   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877633   39129 command_runner.go:130] >       },
	I1211 00:11:34.877636   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877640   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877646   39129 command_runner.go:130] >     },
	I1211 00:11:34.877649   39129 command_runner.go:130] >     {
	I1211 00:11:34.877656   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.877662   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877667   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.877670   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877674   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877681   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.877695   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.877699   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877703   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.877707   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877714   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.877717   39129 command_runner.go:130] >       },
	I1211 00:11:34.877721   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877732   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.877738   39129 command_runner.go:130] >     }
	I1211 00:11:34.877741   39129 command_runner.go:130] >   ]
	I1211 00:11:34.877744   39129 command_runner.go:130] > }
	I1211 00:11:34.877906   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.877920   39129 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:11:34.877980   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.904837   39129 command_runner.go:130] > {
	I1211 00:11:34.904873   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.904879   39129 command_runner.go:130] >     {
	I1211 00:11:34.904887   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.904893   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904899   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.904903   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904925   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.904940   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.904949   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.904958   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904962   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.904966   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.904971   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.904975   39129 command_runner.go:130] >     },
	I1211 00:11:34.904978   39129 command_runner.go:130] >     {
	I1211 00:11:34.904985   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.904989   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904999   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.905010   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905015   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905023   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.905032   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.905038   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905042   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.905046   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905054   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905064   39129 command_runner.go:130] >     },
	I1211 00:11:34.905068   39129 command_runner.go:130] >     {
	I1211 00:11:34.905075   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.905079   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905084   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.905090   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905095   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905103   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.905113   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.905121   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905126   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.905130   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.905134   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905143   39129 command_runner.go:130] >     },
	I1211 00:11:34.905146   39129 command_runner.go:130] >     {
	I1211 00:11:34.905153   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.905162   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905167   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.905170   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905175   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905182   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.905192   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.905195   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905199   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.905209   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905217   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905228   39129 command_runner.go:130] >       },
	I1211 00:11:34.905237   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905244   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905248   39129 command_runner.go:130] >     },
	I1211 00:11:34.905251   39129 command_runner.go:130] >     {
	I1211 00:11:34.905258   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.905262   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905267   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.905272   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905276   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905284   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.905295   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.905302   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905306   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.905310   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905315   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905322   39129 command_runner.go:130] >       },
	I1211 00:11:34.905326   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905330   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905334   39129 command_runner.go:130] >     },
	I1211 00:11:34.905337   39129 command_runner.go:130] >     {
	I1211 00:11:34.905351   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.905355   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905361   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.905368   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905378   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905391   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.905400   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.905408   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905413   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.905417   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905424   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905431   39129 command_runner.go:130] >       },
	I1211 00:11:34.905435   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905439   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905441   39129 command_runner.go:130] >     },
	I1211 00:11:34.905444   39129 command_runner.go:130] >     {
	I1211 00:11:34.905451   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.905457   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905463   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.905466   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905470   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.905492   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.905496   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905500   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.905509   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905513   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905516   39129 command_runner.go:130] >     },
	I1211 00:11:34.905519   39129 command_runner.go:130] >     {
	I1211 00:11:34.905526   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.905535   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905541   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.905544   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905548   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905556   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.905573   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.905577   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905581   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.905585   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905589   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905592   39129 command_runner.go:130] >       },
	I1211 00:11:34.905596   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905604   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905612   39129 command_runner.go:130] >     },
	I1211 00:11:34.905619   39129 command_runner.go:130] >     {
	I1211 00:11:34.905625   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.905629   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905634   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.905637   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905641   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905657   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.905665   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.905671   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905675   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.905679   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905683   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.905686   39129 command_runner.go:130] >       },
	I1211 00:11:34.905690   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905697   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.905700   39129 command_runner.go:130] >     }
	I1211 00:11:34.905703   39129 command_runner.go:130] >   ]
	I1211 00:11:34.905705   39129 command_runner.go:130] > }
	I1211 00:11:34.908324   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.908347   39129 cache_images.go:86] Images are preloaded, skipping loading
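	Both crictl invocations above return the same image list, which is why preload extraction and image loading are both skipped. A hedged equivalent of that check, run from the host against the driver container (assumes jq on the host; under the docker driver the container name matches the profile name):
	  docker exec functional-786978 sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep v1.35.0-beta.0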
	I1211 00:11:34.908354   39129 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:11:34.908461   39129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
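	A hedged way to confirm that the kubelet drop-in rendered above actually took effect on the node (systemctl cat prints the unit together with any drop-in overrides; the container name is again the profile name):
	  docker exec functional-786978 systemctl cat kubelet.service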
	I1211 00:11:34.908543   39129 ssh_runner.go:195] Run: crio config
	I1211 00:11:34.971791   39129 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1211 00:11:34.971813   39129 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1211 00:11:34.971821   39129 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1211 00:11:34.971824   39129 command_runner.go:130] > #
	I1211 00:11:34.971832   39129 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1211 00:11:34.971839   39129 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1211 00:11:34.971846   39129 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1211 00:11:34.971853   39129 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1211 00:11:34.971857   39129 command_runner.go:130] > # reload'.
	I1211 00:11:34.971875   39129 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1211 00:11:34.971882   39129 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1211 00:11:34.971888   39129 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1211 00:11:34.971894   39129 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1211 00:11:34.971898   39129 command_runner.go:130] > [crio]
	I1211 00:11:34.971903   39129 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1211 00:11:34.971908   39129 command_runner.go:130] > # containers images, in this directory.
	I1211 00:11:34.972453   39129 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1211 00:11:34.972468   39129 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1211 00:11:34.973023   39129 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1211 00:11:34.973035   39129 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I1211 00:11:34.973741   39129 command_runner.go:130] > # imagestore = ""
	I1211 00:11:34.973760   39129 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1211 00:11:34.973768   39129 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1211 00:11:34.973950   39129 command_runner.go:130] > # storage_driver = "overlay"
	I1211 00:11:34.973965   39129 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1211 00:11:34.973972   39129 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1211 00:11:34.974083   39129 command_runner.go:130] > # storage_option = [
	I1211 00:11:34.974240   39129 command_runner.go:130] > # ]
	I1211 00:11:34.974255   39129 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1211 00:11:34.974262   39129 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1211 00:11:34.974433   39129 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1211 00:11:34.974477   39129 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1211 00:11:34.974487   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1211 00:11:34.974492   39129 command_runner.go:130] > # always happen on a node reboot
	I1211 00:11:34.974707   39129 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1211 00:11:34.974755   39129 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1211 00:11:34.974769   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1211 00:11:34.974774   39129 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1211 00:11:34.974951   39129 command_runner.go:130] > # version_file_persist = ""
	I1211 00:11:34.974999   39129 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1211 00:11:34.975014   39129 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1211 00:11:34.975286   39129 command_runner.go:130] > # internal_wipe = true
	I1211 00:11:34.975303   39129 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1211 00:11:34.975309   39129 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1211 00:11:34.975533   39129 command_runner.go:130] > # internal_repair = true
	I1211 00:11:34.975547   39129 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1211 00:11:34.975554   39129 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1211 00:11:34.975560   39129 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1211 00:11:34.975800   39129 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1211 00:11:34.975813   39129 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1211 00:11:34.975817   39129 command_runner.go:130] > [crio.api]
	I1211 00:11:34.975838   39129 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1211 00:11:34.976047   39129 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1211 00:11:34.976068   39129 command_runner.go:130] > # IP address on which the stream server will listen.
	I1211 00:11:34.976289   39129 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1211 00:11:34.976305   39129 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1211 00:11:34.976322   39129 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1211 00:11:34.976522   39129 command_runner.go:130] > # stream_port = "0"
	I1211 00:11:34.976537   39129 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1211 00:11:34.976743   39129 command_runner.go:130] > # stream_enable_tls = false
	I1211 00:11:34.976759   39129 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1211 00:11:34.976966   39129 command_runner.go:130] > # stream_idle_timeout = ""
	I1211 00:11:34.976981   39129 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1211 00:11:34.976987   39129 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977102   39129 command_runner.go:130] > # stream_tls_cert = ""
	I1211 00:11:34.977116   39129 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1211 00:11:34.977122   39129 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977375   39129 command_runner.go:130] > # stream_tls_key = ""
	I1211 00:11:34.977408   39129 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1211 00:11:34.977433   39129 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1211 00:11:34.977440   39129 command_runner.go:130] > # automatically pick up the changes.
	I1211 00:11:34.977571   39129 command_runner.go:130] > # stream_tls_ca = ""
	I1211 00:11:34.977641   39129 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977779   39129 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1211 00:11:34.977797   39129 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977991   39129 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1211 00:11:34.978007   39129 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1211 00:11:34.978040   39129 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1211 00:11:34.978056   39129 command_runner.go:130] > [crio.runtime]
	I1211 00:11:34.978069   39129 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1211 00:11:34.978076   39129 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1211 00:11:34.978080   39129 command_runner.go:130] > # "nofile=1024:2048"
	I1211 00:11:34.978086   39129 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1211 00:11:34.978208   39129 command_runner.go:130] > # default_ulimits = [
	I1211 00:11:34.978352   39129 command_runner.go:130] > # ]
	I1211 00:11:34.978369   39129 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1211 00:11:34.978551   39129 command_runner.go:130] > # no_pivot = false
	I1211 00:11:34.978566   39129 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1211 00:11:34.978572   39129 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1211 00:11:34.978723   39129 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1211 00:11:34.978739   39129 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1211 00:11:34.978744   39129 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1211 00:11:34.978775   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.978921   39129 command_runner.go:130] > # conmon = ""
	I1211 00:11:34.978933   39129 command_runner.go:130] > # Cgroup setting for conmon
	I1211 00:11:34.978941   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1211 00:11:34.979286   39129 command_runner.go:130] > conmon_cgroup = "pod"
	I1211 00:11:34.979301   39129 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1211 00:11:34.979307   39129 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1211 00:11:34.979343   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.979348   39129 command_runner.go:130] > # conmon_env = [
	I1211 00:11:34.979496   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979512   39129 command_runner.go:130] > # Additional environment variables to set for all the
	I1211 00:11:34.979518   39129 command_runner.go:130] > # containers. These are overridden if set in the
	I1211 00:11:34.979524   39129 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1211 00:11:34.979552   39129 command_runner.go:130] > # default_env = [
	I1211 00:11:34.979707   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979725   39129 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1211 00:11:34.979734   39129 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1211 00:11:34.979983   39129 command_runner.go:130] > # selinux = false
	I1211 00:11:34.980000   39129 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1211 00:11:34.980009   39129 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1211 00:11:34.980015   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980366   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.980414   39129 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1211 00:11:34.980429   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980434   39129 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1211 00:11:34.980447   39129 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1211 00:11:34.980453   39129 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1211 00:11:34.980464   39129 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1211 00:11:34.980471   39129 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1211 00:11:34.980493   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980499   39129 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1211 00:11:34.980514   39129 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1211 00:11:34.980524   39129 command_runner.go:130] > # the cgroup blockio controller.
	I1211 00:11:34.980678   39129 command_runner.go:130] > # blockio_config_file = ""
	I1211 00:11:34.980713   39129 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1211 00:11:34.980723   39129 command_runner.go:130] > # blockio parameters.
	I1211 00:11:34.980981   39129 command_runner.go:130] > # blockio_reload = false
	I1211 00:11:34.980995   39129 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1211 00:11:34.980999   39129 command_runner.go:130] > # irqbalance daemon.
	I1211 00:11:34.981198   39129 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1211 00:11:34.981209   39129 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1211 00:11:34.981217   39129 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1211 00:11:34.981265   39129 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1211 00:11:34.981385   39129 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1211 00:11:34.981396   39129 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1211 00:11:34.981402   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.981515   39129 command_runner.go:130] > # rdt_config_file = ""
	I1211 00:11:34.981525   39129 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1211 00:11:34.981657   39129 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1211 00:11:34.981668   39129 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1211 00:11:34.981795   39129 command_runner.go:130] > # separate_pull_cgroup = ""
	I1211 00:11:34.981809   39129 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1211 00:11:34.981816   39129 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1211 00:11:34.981820   39129 command_runner.go:130] > # will be added.
	I1211 00:11:34.981926   39129 command_runner.go:130] > # default_capabilities = [
	I1211 00:11:34.982055   39129 command_runner.go:130] > # 	"CHOWN",
	I1211 00:11:34.982151   39129 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1211 00:11:34.982256   39129 command_runner.go:130] > # 	"FSETID",
	I1211 00:11:34.982350   39129 command_runner.go:130] > # 	"FOWNER",
	I1211 00:11:34.982451   39129 command_runner.go:130] > # 	"SETGID",
	I1211 00:11:34.982543   39129 command_runner.go:130] > # 	"SETUID",
	I1211 00:11:34.982687   39129 command_runner.go:130] > # 	"SETPCAP",
	I1211 00:11:34.982695   39129 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1211 00:11:34.982819   39129 command_runner.go:130] > # 	"KILL",
	I1211 00:11:34.982949   39129 command_runner.go:130] > # ]
	I1211 00:11:34.982960   39129 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1211 00:11:34.982993   39129 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1211 00:11:34.983107   39129 command_runner.go:130] > # add_inheritable_capabilities = false
	I1211 00:11:34.983118   39129 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1211 00:11:34.983132   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983136   39129 command_runner.go:130] > default_sysctls = [
	I1211 00:11:34.983272   39129 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1211 00:11:34.983279   39129 command_runner.go:130] > ]
	I1211 00:11:34.983285   39129 command_runner.go:130] > # List of devices on the host that a
	I1211 00:11:34.983300   39129 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1211 00:11:34.983304   39129 command_runner.go:130] > # allowed_devices = [
	I1211 00:11:34.983428   39129 command_runner.go:130] > # 	"/dev/fuse",
	I1211 00:11:34.983527   39129 command_runner.go:130] > # 	"/dev/net/tun",
	I1211 00:11:34.983650   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983660   39129 command_runner.go:130] > # List of additional devices, specified as
	I1211 00:11:34.983668   39129 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1211 00:11:34.983680   39129 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1211 00:11:34.983687   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983813   39129 command_runner.go:130] > # additional_devices = [
	I1211 00:11:34.983820   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983826   39129 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1211 00:11:34.983923   39129 command_runner.go:130] > # cdi_spec_dirs = [
	I1211 00:11:34.984053   39129 command_runner.go:130] > # 	"/etc/cdi",
	I1211 00:11:34.984060   39129 command_runner.go:130] > # 	"/var/run/cdi",
	I1211 00:11:34.984160   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984177   39129 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1211 00:11:34.984184   39129 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1211 00:11:34.984195   39129 command_runner.go:130] > # Defaults to false.
	I1211 00:11:34.984334   39129 command_runner.go:130] > # device_ownership_from_security_context = false
	I1211 00:11:34.984345   39129 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1211 00:11:34.984355   39129 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1211 00:11:34.984488   39129 command_runner.go:130] > # hooks_dir = [
	I1211 00:11:34.984640   39129 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1211 00:11:34.984647   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984653   39129 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1211 00:11:34.984667   39129 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1211 00:11:34.984672   39129 command_runner.go:130] > # its default mounts from the following two files:
	I1211 00:11:34.984675   39129 command_runner.go:130] > #
	I1211 00:11:34.984681   39129 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1211 00:11:34.984694   39129 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1211 00:11:34.984700   39129 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1211 00:11:34.984703   39129 command_runner.go:130] > #
	I1211 00:11:34.984710   39129 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1211 00:11:34.984716   39129 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1211 00:11:34.984722   39129 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1211 00:11:34.984727   39129 command_runner.go:130] > #      only add mounts it finds in this file.
	I1211 00:11:34.984729   39129 command_runner.go:130] > #
	I1211 00:11:34.984883   39129 command_runner.go:130] > # default_mounts_file = ""
	I1211 00:11:34.984900   39129 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1211 00:11:34.984908   39129 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1211 00:11:34.985051   39129 command_runner.go:130] > # pids_limit = -1
	I1211 00:11:34.985062   39129 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1211 00:11:34.985075   39129 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1211 00:11:34.985083   39129 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1211 00:11:34.985091   39129 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1211 00:11:34.985222   39129 command_runner.go:130] > # log_size_max = -1
	I1211 00:11:34.985233   39129 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1211 00:11:34.985372   39129 command_runner.go:130] > # log_to_journald = false
	I1211 00:11:34.985382   39129 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1211 00:11:34.985404   39129 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1211 00:11:34.985411   39129 command_runner.go:130] > # Path to directory for container attach sockets.
	I1211 00:11:34.985416   39129 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1211 00:11:34.985422   39129 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1211 00:11:34.985425   39129 command_runner.go:130] > # bind_mount_prefix = ""
	I1211 00:11:34.985434   39129 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1211 00:11:34.985569   39129 command_runner.go:130] > # read_only = false
	I1211 00:11:34.985580   39129 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1211 00:11:34.985587   39129 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1211 00:11:34.985601   39129 command_runner.go:130] > # live configuration reload.
	I1211 00:11:34.985605   39129 command_runner.go:130] > # log_level = "info"
	I1211 00:11:34.985611   39129 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1211 00:11:34.985616   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.985619   39129 command_runner.go:130] > # log_filter = ""
	I1211 00:11:34.985626   39129 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985632   39129 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1211 00:11:34.985635   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985643   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985647   39129 command_runner.go:130] > # uid_mappings = ""
	I1211 00:11:34.985654   39129 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985660   39129 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1211 00:11:34.985664   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985672   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985681   39129 command_runner.go:130] > # gid_mappings = ""
	I1211 00:11:34.985688   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1211 00:11:34.985694   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985700   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985708   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985712   39129 command_runner.go:130] > # minimum_mappable_uid = -1
	I1211 00:11:34.985718   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1211 00:11:34.985723   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985729   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985737   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985741   39129 command_runner.go:130] > # minimum_mappable_gid = -1
	I1211 00:11:34.985747   39129 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1211 00:11:34.985753   39129 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1211 00:11:34.985759   39129 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1211 00:11:34.985975   39129 command_runner.go:130] > # ctr_stop_timeout = 30
	I1211 00:11:34.985988   39129 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1211 00:11:34.985994   39129 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1211 00:11:34.985999   39129 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1211 00:11:34.986004   39129 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1211 00:11:34.986008   39129 command_runner.go:130] > # drop_infra_ctr = true
	I1211 00:11:34.986014   39129 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1211 00:11:34.986019   39129 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1211 00:11:34.986029   39129 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1211 00:11:34.986033   39129 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1211 00:11:34.986040   39129 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1211 00:11:34.986046   39129 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1211 00:11:34.986051   39129 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1211 00:11:34.986057   39129 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1211 00:11:34.986060   39129 command_runner.go:130] > # shared_cpuset = ""
	I1211 00:11:34.986066   39129 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1211 00:11:34.986071   39129 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1211 00:11:34.986075   39129 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1211 00:11:34.986082   39129 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1211 00:11:34.986085   39129 command_runner.go:130] > # pinns_path = ""
	I1211 00:11:34.986091   39129 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1211 00:11:34.986098   39129 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1211 00:11:34.986101   39129 command_runner.go:130] > # enable_criu_support = true
	I1211 00:11:34.986107   39129 command_runner.go:130] > # Enable/disable the generation of the container,
	I1211 00:11:34.986112   39129 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1211 00:11:34.986116   39129 command_runner.go:130] > # enable_pod_events = false
	I1211 00:11:34.986122   39129 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1211 00:11:34.986131   39129 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1211 00:11:34.986135   39129 command_runner.go:130] > # default_runtime = "crun"
	I1211 00:11:34.986140   39129 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1211 00:11:34.986148   39129 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1211 00:11:34.986159   39129 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1211 00:11:34.986164   39129 command_runner.go:130] > # creation as a file is not desired either.
	I1211 00:11:34.986172   39129 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1211 00:11:34.986177   39129 command_runner.go:130] > # the hostname is being managed dynamically.
	I1211 00:11:34.986181   39129 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1211 00:11:34.986185   39129 command_runner.go:130] > # ]
	I1211 00:11:34.986192   39129 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1211 00:11:34.986198   39129 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1211 00:11:34.986205   39129 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1211 00:11:34.986210   39129 command_runner.go:130] > # Each entry in the table should follow the format:
	I1211 00:11:34.986212   39129 command_runner.go:130] > #
	I1211 00:11:34.986217   39129 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1211 00:11:34.986221   39129 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1211 00:11:34.986226   39129 command_runner.go:130] > # runtime_type = "oci"
	I1211 00:11:34.986231   39129 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1211 00:11:34.986235   39129 command_runner.go:130] > # inherit_default_runtime = false
	I1211 00:11:34.986240   39129 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1211 00:11:34.986244   39129 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1211 00:11:34.986248   39129 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1211 00:11:34.986251   39129 command_runner.go:130] > # monitor_env = []
	I1211 00:11:34.986256   39129 command_runner.go:130] > # privileged_without_host_devices = false
	I1211 00:11:34.986259   39129 command_runner.go:130] > # allowed_annotations = []
	I1211 00:11:34.986265   39129 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1211 00:11:34.986268   39129 command_runner.go:130] > # no_sync_log = false
	I1211 00:11:34.986272   39129 command_runner.go:130] > # default_annotations = {}
	I1211 00:11:34.986276   39129 command_runner.go:130] > # stream_websockets = false
	I1211 00:11:34.986279   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.986309   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.986315   39129 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1211 00:11:34.986324   39129 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1211 00:11:34.986330   39129 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1211 00:11:34.986337   39129 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1211 00:11:34.986340   39129 command_runner.go:130] > #   in $PATH.
	I1211 00:11:34.986346   39129 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1211 00:11:34.986350   39129 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1211 00:11:34.986356   39129 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1211 00:11:34.986359   39129 command_runner.go:130] > #   state.
	I1211 00:11:34.986366   39129 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1211 00:11:34.986375   39129 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1211 00:11:34.986381   39129 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1211 00:11:34.986387   39129 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1211 00:11:34.986392   39129 command_runner.go:130] > #   the values from the default runtime on load time.
	I1211 00:11:34.986398   39129 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1211 00:11:34.986404   39129 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1211 00:11:34.986410   39129 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1211 00:11:34.986417   39129 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1211 00:11:34.986421   39129 command_runner.go:130] > #   The currently recognized values are:
	I1211 00:11:34.986428   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1211 00:11:34.986435   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1211 00:11:34.986440   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1211 00:11:34.986446   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1211 00:11:34.986455   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1211 00:11:34.986462   39129 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1211 00:11:34.986469   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1211 00:11:34.986475   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1211 00:11:34.986481   39129 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1211 00:11:34.986487   39129 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1211 00:11:34.986494   39129 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1211 00:11:34.986500   39129 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1211 00:11:34.986505   39129 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1211 00:11:34.986511   39129 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1211 00:11:34.986517   39129 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1211 00:11:34.986528   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1211 00:11:34.986534   39129 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1211 00:11:34.986538   39129 command_runner.go:130] > #   deprecated option "conmon".
	I1211 00:11:34.986545   39129 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1211 00:11:34.986550   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1211 00:11:34.986556   39129 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1211 00:11:34.986561   39129 command_runner.go:130] > #   should be moved to the container's cgroup
	I1211 00:11:34.986567   39129 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1211 00:11:34.986572   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1211 00:11:34.986579   39129 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1211 00:11:34.986583   39129 command_runner.go:130] > #   conmon-rs by using:
	I1211 00:11:34.986591   39129 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1211 00:11:34.986598   39129 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1211 00:11:34.986606   39129 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1211 00:11:34.986613   39129 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1211 00:11:34.986618   39129 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1211 00:11:34.986625   39129 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1211 00:11:34.986633   39129 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1211 00:11:34.986641   39129 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1211 00:11:34.986651   39129 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1211 00:11:34.986658   39129 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1211 00:11:34.986662   39129 command_runner.go:130] > #   when a machine crash happens.
	I1211 00:11:34.986669   39129 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1211 00:11:34.986677   39129 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1211 00:11:34.986685   39129 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1211 00:11:34.986689   39129 command_runner.go:130] > #   seccomp profile for the runtime.
	I1211 00:11:34.986695   39129 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1211 00:11:34.986702   39129 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1211 00:11:34.986704   39129 command_runner.go:130] > #
	I1211 00:11:34.986708   39129 command_runner.go:130] > # Using the seccomp notifier feature:
	I1211 00:11:34.986711   39129 command_runner.go:130] > #
	I1211 00:11:34.986717   39129 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1211 00:11:34.986724   39129 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1211 00:11:34.986729   39129 command_runner.go:130] > #
	I1211 00:11:34.986739   39129 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1211 00:11:34.986745   39129 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1211 00:11:34.986748   39129 command_runner.go:130] > #
	I1211 00:11:34.986754   39129 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1211 00:11:34.986757   39129 command_runner.go:130] > # feature.
	I1211 00:11:34.986760   39129 command_runner.go:130] > #
	I1211 00:11:34.986766   39129 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1211 00:11:34.986772   39129 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1211 00:11:34.986778   39129 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1211 00:11:34.986784   39129 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1211 00:11:34.986790   39129 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1211 00:11:34.986792   39129 command_runner.go:130] > #
	I1211 00:11:34.986799   39129 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1211 00:11:34.986805   39129 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1211 00:11:34.986808   39129 command_runner.go:130] > #
	I1211 00:11:34.986814   39129 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1211 00:11:34.986820   39129 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1211 00:11:34.986822   39129 command_runner.go:130] > #
	I1211 00:11:34.986828   39129 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1211 00:11:34.986833   39129 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1211 00:11:34.986837   39129 command_runner.go:130] > # limitation.
	I1211 00:11:34.986842   39129 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1211 00:11:34.986846   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1211 00:11:34.986850   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986853   39129 command_runner.go:130] > runtime_root = "/run/crun"
	I1211 00:11:34.986857   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986860   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986864   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.986868   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.986872   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.986876   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.986880   39129 command_runner.go:130] > allowed_annotations = [
	I1211 00:11:34.986887   39129 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1211 00:11:34.986889   39129 command_runner.go:130] > ]
	I1211 00:11:34.986894   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.986898   39129 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1211 00:11:34.986902   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1211 00:11:34.986906   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986909   39129 command_runner.go:130] > runtime_root = "/run/runc"
	I1211 00:11:34.986913   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986917   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986921   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.987106   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.987121   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.987127   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.987132   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.987139   39129 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1211 00:11:34.987147   39129 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1211 00:11:34.987154   39129 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1211 00:11:34.987166   39129 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1211 00:11:34.987177   39129 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1211 00:11:34.987187   39129 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1211 00:11:34.987194   39129 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1211 00:11:34.987200   39129 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1211 00:11:34.987209   39129 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1211 00:11:34.987218   39129 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1211 00:11:34.987224   39129 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1211 00:11:34.987231   39129 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1211 00:11:34.987235   39129 command_runner.go:130] > # Example:
	I1211 00:11:34.987241   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1211 00:11:34.987246   39129 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1211 00:11:34.987251   39129 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1211 00:11:34.987255   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1211 00:11:34.987258   39129 command_runner.go:130] > # cpuset = "0-1"
	I1211 00:11:34.987262   39129 command_runner.go:130] > # cpushares = "5"
	I1211 00:11:34.987269   39129 command_runner.go:130] > # cpuquota = "1000"
	I1211 00:11:34.987273   39129 command_runner.go:130] > # cpuperiod = "100000"
	I1211 00:11:34.987277   39129 command_runner.go:130] > # cpulimit = "35"
	I1211 00:11:34.987280   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.987284   39129 command_runner.go:130] > # The workload name is workload-type.
	I1211 00:11:34.987292   39129 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1211 00:11:34.987298   39129 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1211 00:11:34.987303   39129 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1211 00:11:34.987311   39129 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1211 00:11:34.987317   39129 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
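The pod side of the opt-in described above could look like the following sketch; the pod and container names are hypothetical, and the annotation keys follow the $activation_annotation and $annotation_prefix.$resource/$ctrName formats from the example:

# Opt the whole pod into the "workload-type" workload (key only, value ignored):
kubectl annotate pod my-pod io.crio/workload=
# Override one resource for one container ($annotation_prefix.$resource/$ctrName):
kubectl annotate pod my-pod io.crio.workload-type.cpushares/my-container=200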
	I1211 00:11:34.987322   39129 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1211 00:11:34.987328   39129 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1211 00:11:34.987332   39129 command_runner.go:130] > # Default value is set to true
	I1211 00:11:34.987336   39129 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1211 00:11:34.987342   39129 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1211 00:11:34.987346   39129 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1211 00:11:34.987350   39129 command_runner.go:130] > # Default value is set to 'false'
	I1211 00:11:34.987355   39129 command_runner.go:130] > # disable_hostport_mapping = false
	I1211 00:11:34.987361   39129 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1211 00:11:34.987369   39129 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1211 00:11:34.987372   39129 command_runner.go:130] > # timezone = ""
	I1211 00:11:34.987379   39129 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1211 00:11:34.987382   39129 command_runner.go:130] > #
	I1211 00:11:34.987387   39129 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1211 00:11:34.987393   39129 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1211 00:11:34.987396   39129 command_runner.go:130] > [crio.image]
	I1211 00:11:34.987402   39129 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1211 00:11:34.987407   39129 command_runner.go:130] > # default_transport = "docker://"
	I1211 00:11:34.987413   39129 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1211 00:11:34.987419   39129 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987423   39129 command_runner.go:130] > # global_auth_file = ""
	I1211 00:11:34.987428   39129 command_runner.go:130] > # The image used to instantiate infra containers.
	I1211 00:11:34.987432   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987442   39129 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.987448   39129 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1211 00:11:34.987454   39129 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987458   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987463   39129 command_runner.go:130] > # pause_image_auth_file = ""
	I1211 00:11:34.987468   39129 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1211 00:11:34.987478   39129 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1211 00:11:34.987484   39129 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1211 00:11:34.987489   39129 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1211 00:11:34.987505   39129 command_runner.go:130] > # pause_command = "/pause"
	I1211 00:11:34.987511   39129 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1211 00:11:34.987518   39129 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1211 00:11:34.987524   39129 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1211 00:11:34.987530   39129 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1211 00:11:34.987536   39129 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1211 00:11:34.987542   39129 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1211 00:11:34.987545   39129 command_runner.go:130] > # pinned_images = [
	I1211 00:11:34.987549   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987555   39129 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1211 00:11:34.987561   39129 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1211 00:11:34.987567   39129 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1211 00:11:34.987574   39129 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1211 00:11:34.987579   39129 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1211 00:11:34.987584   39129 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1211 00:11:34.987589   39129 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1211 00:11:34.987596   39129 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1211 00:11:34.987602   39129 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1211 00:11:34.987608   39129 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1211 00:11:34.987614   39129 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1211 00:11:34.987618   39129 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
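For reference, the smallest valid containers-policy.json(5) at the signature_policy path set above accepts every image; this is a sketch, and a hardened cluster would normally pin registries to signature requirements instead:

sudo tee /etc/crio/policy.json >/dev/null <<'EOF'
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ]
}
EOF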
	I1211 00:11:34.987624   39129 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1211 00:11:34.987631   39129 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1211 00:11:34.987634   39129 command_runner.go:130] > # changing them here.
	I1211 00:11:34.987643   39129 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1211 00:11:34.987646   39129 command_runner.go:130] > # insecure_registries = [
	I1211 00:11:34.987651   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987657   39129 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1211 00:11:34.987662   39129 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1211 00:11:34.987666   39129 command_runner.go:130] > # image_volumes = "mkdir"
	I1211 00:11:34.987671   39129 command_runner.go:130] > # Temporary directory to use for storing big files
	I1211 00:11:34.987675   39129 command_runner.go:130] > # big_files_temporary_dir = ""
	I1211 00:11:34.987681   39129 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1211 00:11:34.987688   39129 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1211 00:11:34.987692   39129 command_runner.go:130] > # auto_reload_registries = false
	I1211 00:11:34.987698   39129 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1211 00:11:34.987706   39129 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1211 00:11:34.987711   39129 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1211 00:11:34.987715   39129 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1211 00:11:34.987719   39129 command_runner.go:130] > # The mode of short name resolution.
	I1211 00:11:34.987726   39129 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1211 00:11:34.987734   39129 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used but the results are ambiguous.
	I1211 00:11:34.987739   39129 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1211 00:11:34.987743   39129 command_runner.go:130] > # short_name_mode = "enforcing"
	I1211 00:11:34.987749   39129 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1211 00:11:34.987754   39129 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1211 00:11:34.987763   39129 command_runner.go:130] > # oci_artifact_mount_support = true
	I1211 00:11:34.987770   39129 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1211 00:11:34.987773   39129 command_runner.go:130] > # CNI plugins.
	I1211 00:11:34.987776   39129 command_runner.go:130] > [crio.network]
	I1211 00:11:34.987782   39129 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1211 00:11:34.987787   39129 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1211 00:11:34.987791   39129 command_runner.go:130] > # cni_default_network = ""
	I1211 00:11:34.987797   39129 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1211 00:11:34.987801   39129 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1211 00:11:34.987806   39129 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1211 00:11:34.987809   39129 command_runner.go:130] > # plugin_dirs = [
	I1211 00:11:34.987816   39129 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1211 00:11:34.987819   39129 command_runner.go:130] > # ]
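network_dir and plugin_dirs above are where CRI-O discovers CNI networks and binaries; a minimal bridge network placed in network_dir might look like this (a sketch only: the file name, network name, and subnet are illustrative, and this cluster actually uses kindnet, as noted further down):

sudo tee /etc/cni/net.d/10-example-bridge.conflist >/dev/null <<'EOF'
{
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": true,
            "ipMasq": true,
            "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
    ]
}
EOF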
	I1211 00:11:34.987823   39129 command_runner.go:130] > # List of included pod metrics.
	I1211 00:11:34.987827   39129 command_runner.go:130] > # included_pod_metrics = [
	I1211 00:11:34.987830   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987837   39129 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1211 00:11:34.987840   39129 command_runner.go:130] > [crio.metrics]
	I1211 00:11:34.987845   39129 command_runner.go:130] > # Globally enable or disable metrics support.
	I1211 00:11:34.987849   39129 command_runner.go:130] > # enable_metrics = false
	I1211 00:11:34.987853   39129 command_runner.go:130] > # Specify enabled metrics collectors.
	I1211 00:11:34.987859   39129 command_runner.go:130] > # Per default all metrics are enabled.
	I1211 00:11:34.987865   39129 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1211 00:11:34.987871   39129 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1211 00:11:34.987877   39129 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1211 00:11:34.987880   39129 command_runner.go:130] > # metrics_collectors = [
	I1211 00:11:34.987884   39129 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1211 00:11:34.987888   39129 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1211 00:11:34.987892   39129 command_runner.go:130] > # 	"containers_oom_total",
	I1211 00:11:34.987895   39129 command_runner.go:130] > # 	"processes_defunct",
	I1211 00:11:34.987900   39129 command_runner.go:130] > # 	"operations_total",
	I1211 00:11:34.987904   39129 command_runner.go:130] > # 	"operations_latency_seconds",
	I1211 00:11:34.987908   39129 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1211 00:11:34.987912   39129 command_runner.go:130] > # 	"operations_errors_total",
	I1211 00:11:34.987916   39129 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1211 00:11:34.987920   39129 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1211 00:11:34.987924   39129 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1211 00:11:34.987928   39129 command_runner.go:130] > # 	"image_pulls_success_total",
	I1211 00:11:34.987932   39129 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1211 00:11:34.987936   39129 command_runner.go:130] > # 	"containers_oom_count_total",
	I1211 00:11:34.987942   39129 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1211 00:11:34.987946   39129 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1211 00:11:34.987950   39129 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1211 00:11:34.987953   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987962   39129 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1211 00:11:34.987967   39129 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1211 00:11:34.987972   39129 command_runner.go:130] > # The port on which the metrics server will listen.
	I1211 00:11:34.987975   39129 command_runner.go:130] > # metrics_port = 9090
	I1211 00:11:34.987980   39129 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1211 00:11:34.987984   39129 command_runner.go:130] > # metrics_socket = ""
	I1211 00:11:34.987989   39129 command_runner.go:130] > # The certificate for the secure metrics server.
	I1211 00:11:34.987994   39129 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1211 00:11:34.988001   39129 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1211 00:11:34.988005   39129 command_runner.go:130] > # certificate on any modification event.
	I1211 00:11:34.988008   39129 command_runner.go:130] > # metrics_cert = ""
	I1211 00:11:34.988013   39129 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1211 00:11:34.988018   39129 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1211 00:11:34.988021   39129 command_runner.go:130] > # metrics_key = ""
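A sketch of enabling the metrics endpoint described above and scraping it locally; the drop-in file name is hypothetical, and the host/port are the defaults shown in the comments:

sudo tee /etc/crio/crio.conf.d/99-metrics.conf >/dev/null <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
EOF
sudo systemctl restart crio
curl -s http://127.0.0.1:9090/metrics | head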
	I1211 00:11:34.988026   39129 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1211 00:11:34.988030   39129 command_runner.go:130] > [crio.tracing]
	I1211 00:11:34.988035   39129 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1211 00:11:34.988038   39129 command_runner.go:130] > # enable_tracing = false
	I1211 00:11:34.988044   39129 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1211 00:11:34.988050   39129 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1211 00:11:34.988056   39129 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1211 00:11:34.988061   39129 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1211 00:11:34.988064   39129 command_runner.go:130] > # CRI-O NRI configuration.
	I1211 00:11:34.988067   39129 command_runner.go:130] > [crio.nri]
	I1211 00:11:34.988071   39129 command_runner.go:130] > # Globally enable or disable NRI.
	I1211 00:11:34.988075   39129 command_runner.go:130] > # enable_nri = true
	I1211 00:11:34.988079   39129 command_runner.go:130] > # NRI socket to listen on.
	I1211 00:11:34.988083   39129 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1211 00:11:34.988087   39129 command_runner.go:130] > # NRI plugin directory to use.
	I1211 00:11:34.988091   39129 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1211 00:11:34.988095   39129 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1211 00:11:34.988100   39129 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1211 00:11:34.988108   39129 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1211 00:11:34.988171   39129 command_runner.go:130] > # nri_disable_connections = false
	I1211 00:11:34.988177   39129 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1211 00:11:34.988182   39129 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1211 00:11:34.988186   39129 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1211 00:11:34.988190   39129 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1211 00:11:34.988194   39129 command_runner.go:130] > # NRI default validator configuration.
	I1211 00:11:34.988201   39129 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1211 00:11:34.988207   39129 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1211 00:11:34.988211   39129 command_runner.go:130] > # can be restricted/rejected:
	I1211 00:11:34.988215   39129 command_runner.go:130] > # - OCI hook injection
	I1211 00:11:34.988220   39129 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1211 00:11:34.988225   39129 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1211 00:11:34.988229   39129 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1211 00:11:34.988233   39129 command_runner.go:130] > # - adjustment of linux namespaces
	I1211 00:11:34.988240   39129 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1211 00:11:34.988246   39129 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1211 00:11:34.988251   39129 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1211 00:11:34.988254   39129 command_runner.go:130] > #
	I1211 00:11:34.988258   39129 command_runner.go:130] > # [crio.nri.default_validator]
	I1211 00:11:34.988262   39129 command_runner.go:130] > # nri_enable_default_validator = false
	I1211 00:11:34.988267   39129 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1211 00:11:34.988272   39129 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1211 00:11:34.988277   39129 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1211 00:11:34.988282   39129 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1211 00:11:34.988287   39129 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1211 00:11:34.988291   39129 command_runner.go:130] > # nri_validator_required_plugins = [
	I1211 00:11:34.988294   39129 command_runner.go:130] > # ]
	I1211 00:11:34.988299   39129 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1211 00:11:34.988306   39129 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1211 00:11:34.988309   39129 command_runner.go:130] > [crio.stats]
	I1211 00:11:34.988316   39129 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1211 00:11:34.988321   39129 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1211 00:11:34.988324   39129 command_runner.go:130] > # stats_collection_period = 0
	I1211 00:11:34.988334   39129 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1211 00:11:34.988341   39129 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1211 00:11:34.988345   39129 command_runner.go:130] > # collection_period = 0
	I1211 00:11:34.988741   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943588402Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1211 00:11:34.988759   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943910852Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1211 00:11:34.988775   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944105801Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1211 00:11:34.988788   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944281599Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1211 00:11:34.988804   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944534263Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.988813   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944919976Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1211 00:11:34.988827   39129 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
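The "!" lines above are the command's stderr and show CRI-O's merge order: the base /etc/crio/crio.conf (skipped here because it does not exist), then the drop-ins under /etc/crio/crio.conf.d in lexical order. The dump itself appears to come from the crio config subcommand, which can be reproduced by hand (a sketch):

sudo crio config > merged.toml 2> merge.log   # TOML on stdout, "Updating config ..." info messages on stderr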
	I1211 00:11:34.988906   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:34.988923   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:34.988942   39129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:11:34.988966   39129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:11:34.989098   39129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
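A rendered config like the one above can be sanity-checked with kubeadm before it is applied; this is a sketch, reusing the binaries path and the kubeadm.yaml.new target that appear a few lines below:

sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new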
	
	I1211 00:11:34.989171   39129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:11:34.996103   39129 command_runner.go:130] > kubeadm
	I1211 00:11:34.996124   39129 command_runner.go:130] > kubectl
	I1211 00:11:34.996130   39129 command_runner.go:130] > kubelet
	I1211 00:11:34.996965   39129 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:11:34.997027   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:11:35.004524   39129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:11:35.022259   39129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:11:35.035877   39129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 00:11:35.049665   39129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:11:35.053270   39129 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1211 00:11:35.053410   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:35.173051   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:35.663593   39129 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:11:35.663611   39129 certs.go:195] generating shared ca certs ...
	I1211 00:11:35.663626   39129 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:35.663843   39129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:11:35.663918   39129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:11:35.664081   39129 certs.go:257] generating profile certs ...
	I1211 00:11:35.664282   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:11:35.664361   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:11:35.664489   39129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:11:35.664502   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 00:11:35.664555   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 00:11:35.664574   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 00:11:35.664591   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 00:11:35.664636   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 00:11:35.664653   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 00:11:35.664664   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 00:11:35.664675   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 00:11:35.664773   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:11:35.664811   39129 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:11:35.664825   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:11:35.664885   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:11:35.664944   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:11:35.664975   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:11:35.665087   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:35.665126   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 00:11:35.665138   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.665177   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.666144   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:11:35.692413   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:11:35.716263   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:11:35.735120   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:11:35.753386   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:11:35.771269   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:11:35.789331   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:11:35.806153   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:11:35.823663   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:11:35.840043   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:11:35.857281   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:11:35.874656   39129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:11:35.887595   39129 ssh_runner.go:195] Run: openssl version
	I1211 00:11:35.893373   39129 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1211 00:11:35.893766   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.901331   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:11:35.908770   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912293   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912332   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912381   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.953295   39129 command_runner.go:130] > 3ec20f2e
	I1211 00:11:35.953382   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:11:35.960497   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.967487   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:11:35.974778   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978822   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978856   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978928   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:36.019575   39129 command_runner.go:130] > b5213941
	I1211 00:11:36.020060   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:11:36.028538   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.036748   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:11:36.045277   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049492   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049553   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049672   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.092814   39129 command_runner.go:130] > 51391683
	I1211 00:11:36.093356   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
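The ln/openssl/test sequences above implement the standard OpenSSL CA-directory convention: each trusted certificate is reachable through a symlink named after its subject hash. Condensed into one sketch (hash-first here; the values match the log, e.g. b5213941 for minikubeCA.pem):

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0" = first cert with this subject hash
sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "linked"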
	I1211 00:11:36.101223   39129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105165   39129 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105191   39129 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1211 00:11:36.105198   39129 command_runner.go:130] > Device: 259,1	Inode: 1312480     Links: 1
	I1211 00:11:36.105205   39129 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:36.105212   39129 command_runner.go:130] > Access: 2025-12-11 00:07:28.485872476 +0000
	I1211 00:11:36.105217   39129 command_runner.go:130] > Modify: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105222   39129 command_runner.go:130] > Change: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105228   39129 command_runner.go:130] >  Birth: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105288   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:11:36.146158   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.146663   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:11:36.187479   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.187576   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:11:36.228130   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.228568   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:11:36.269072   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.269532   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:11:36.310317   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.310832   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1211 00:11:36.353606   39129 command_runner.go:130] > Certificate will not expire
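Each "Certificate will not expire" line above is the output of openssl's -checkend test, which exits 0 if the certificate is still valid the given number of seconds from now (86400 s = 24 h here). For example:

openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
# prints "Certificate will not expire" and exits 0 if valid for at least another 24 h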
	I1211 00:11:36.354067   39129 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
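
The single-line StartCluster blob above is Go's %+v rendering of minikube's cluster config struct, which is why each field prints as Field:value separated by spaces, with empty strings showing as a bare colon. A toy reproduction with a small, illustrative subset of the fields:

    package main

    import "fmt"

    // kubernetesConfig and clusterConfig are tiny stand-ins for
    // minikube's config types, only to show where the log line's
    // formatting comes from.
    type kubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    }

    type clusterConfig struct {
    	Name             string
    	Memory           int
    	CPUs             int
    	Driver           string
    	KubernetesConfig kubernetesConfig
    }

    func main() {
    	cc := clusterConfig{
    		Name:   "functional-786978",
    		Memory: 4096,
    		CPUs:   2,
    		Driver: "docker",
    		KubernetesConfig: kubernetesConfig{
    			KubernetesVersion: "v1.35.0-beta.0",
    			ClusterName:       "functional-786978",
    			ContainerRuntime:  "crio",
    		},
    	}
    	// Prints one space-separated Field:value line, like the log:
    	// {Name:functional-786978 Memory:4096 CPUs:2 Driver:docker
    	//  KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ...}}
    	fmt.Printf("StartCluster: %+v\n", cc)
    }
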
	I1211 00:11:36.354163   39129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:11:36.354246   39129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:11:36.382480   39129 cri.go:89] found id: ""
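
The found id: "" result means the crictl query returned no kube-system containers at this point. A sketch of the same query from Go, assuming crictl is installed on the node (illustrative helper, not minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainerIDs mirrors the crictl invocation in the log:
    // list all containers (running or not) whose pod namespace label
    // is kube-system, printing only their IDs. An empty slice
    // corresponds to the `found id: ""` line above.
    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }
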
	I1211 00:11:36.382557   39129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:11:36.389756   39129 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1211 00:11:36.389777   39129 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1211 00:11:36.389784   39129 command_runner.go:130] > /var/lib/minikube/etcd:
	I1211 00:11:36.390708   39129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:11:36.390737   39129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:11:36.390806   39129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:11:36.398342   39129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:11:36.398732   39129 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.398833   39129 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-786978" cluster setting kubeconfig missing "functional-786978" context setting]
	I1211 00:11:36.399137   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
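
The two kubeconfig.go lines above detect that the profile's cluster and context entries are missing and repair the file under a write lock. A minimal sketch of such a repair with client-go's clientcmd package, using the server address and paths from the log (not minikube's actual kubeconfig code):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/22061-2739/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		cfg = clientcmdapi.NewConfig() // start fresh if unreadable
    	}
    	// Add the missing cluster and context entries for the profile.
    	cfg.Clusters["functional-786978"] = &clientcmdapi.Cluster{
    		Server:               "https://192.168.49.2:8441",
    		CertificateAuthority: "/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt",
    	}
    	cfg.Contexts["functional-786978"] = &clientcmdapi.Context{
    		Cluster:  "functional-786978",
    		AuthInfo: "functional-786978",
    	}
    	cfg.CurrentContext = "functional-786978"
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		fmt.Println("kubeconfig write failed:", err)
    	}
    }
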
	I1211 00:11:36.399560   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.399714   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.400253   39129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 00:11:36.400273   39129 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 00:11:36.400281   39129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 00:11:36.400286   39129 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 00:11:36.400291   39129 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
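
The kapi.go dump above is the client-go rest.Config used for the node requests that follow: the host is the apiserver endpoint, and authentication is mutual TLS via the profile's client certificate and key, verified against the cluster CA. A minimal sketch of building an equivalent client (paths copied from the log; error handling trimmed):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println("building client failed:", err)
    		return
    	}
    	_ = cs // ready for requests such as the node GETs below
    }
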
	I1211 00:11:36.400594   39129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:11:36.400697   39129 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1211 00:11:36.409983   39129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1211 00:11:36.410015   39129 kubeadm.go:602] duration metric: took 19.271635ms to restartPrimaryControlPlane
	I1211 00:11:36.410025   39129 kubeadm.go:403] duration metric: took 55.966406ms to StartCluster
	I1211 00:11:36.410041   39129 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410105   39129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.410754   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410951   39129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:11:36.411375   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:36.411428   39129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 00:11:36.411496   39129 addons.go:70] Setting storage-provisioner=true in profile "functional-786978"
	I1211 00:11:36.411509   39129 addons.go:239] Setting addon storage-provisioner=true in "functional-786978"
	I1211 00:11:36.411539   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.412103   39129 addons.go:70] Setting default-storageclass=true in profile "functional-786978"
	I1211 00:11:36.412128   39129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-786978"
	I1211 00:11:36.412372   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.412555   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.416027   39129 out.go:179] * Verifying Kubernetes components...
	I1211 00:11:36.418962   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:36.445616   39129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 00:11:36.448584   39129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.448615   39129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 00:11:36.448687   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.455632   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.455806   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.456398   39129 addons.go:239] Setting addon default-storageclass=true in "functional-786978"
	I1211 00:11:36.456432   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.459345   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.488078   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.511255   39129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:36.511282   39129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 00:11:36.511350   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.540894   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.608214   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:36.665748   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.679982   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.404051   39129 node_ready.go:35] waiting up to 6m0s for node "functional-786978" to be "Ready" ...
	I1211 00:11:37.404239   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.404634   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404742   39129 retry.go:31] will retry after 310.125043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404824   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404858   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404893   39129 retry.go:31] will retry after 141.721995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404991   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:37.547464   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.613487   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.613562   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.613592   39129 retry.go:31] will retry after 561.758211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.715754   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:37.779510   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.779557   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.779585   39129 retry.go:31] will retry after 505.869102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
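
Every failed apply above is followed by a retry.go line with a slightly different delay: kubectl's validation needs the apiserver's OpenAPI document, and localhost:8441 keeps refusing connections while the control plane restarts, so minikube re-runs the apply on a jittered, growing backoff. A minimal sketch of that retry shape (names illustrative, not minikube's retry package):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryApply re-runs apply until it succeeds or the deadline
    // elapses, sleeping a jittered, growing interval between attempts,
    // which is why the "will retry after ..." delays in the log are
    // irregular and slowly increasing.
    func retryApply(apply func() error, deadline time.Duration) error {
    	start := time.Now()
    	wait := 200 * time.Millisecond
    	for {
    		err := apply()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("giving up after %s: %w", deadline, err)
    		}
    		time.Sleep(wait + time.Duration(rand.Int63n(int64(wait))))
    		wait *= 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retryApply(func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("connection refused") // stand-in failure
    		}
    		return nil
    	}, 30*time.Second)
    	fmt.Println("attempts:", attempts, "err:", err)
    }
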
	I1211 00:11:37.904810   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.904884   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.175539   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.243137   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.243185   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.243204   39129 retry.go:31] will retry after 361.539254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.286533   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:38.344606   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.348111   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.348157   39129 retry.go:31] will retry after 829.218438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.404431   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.404511   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.404881   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.605429   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.661283   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.664833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.664864   39129 retry.go:31] will retry after 800.266997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.905185   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.905301   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.905646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:39.178251   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:39.238429   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.238472   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.238493   39129 retry.go:31] will retry after 1.184749907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.405001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.405348   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:39.405424   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
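
The repeating GET /api/v1/nodes/functional-786978 requests are the node_ready.go readiness poll: the node object is fetched roughly every 500ms and its Ready condition inspected, with connection-refused errors tolerated while the apiserver comes back up. A sketch of that loop with client-go, reusing the TLS paths from the kapi.go line above (function name illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // waitForNodeReady polls until the node's Ready condition is True
    // or the timeout elapses. GET errors (e.g. connection refused
    // during an apiserver restart) are swallowed so polling continues,
    // matching the "will retry" warnings in the log.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient apiserver outages
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8441",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println("building client failed:", err)
    		return
    	}
    	fmt.Println("ready err:", waitForNodeReady(context.Background(), cs, "functional-786978"))
    }
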
	I1211 00:11:39.465581   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:39.526474   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.526525   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.526544   39129 retry.go:31] will retry after 1.807004704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.905105   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.905423   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.405603   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.423936   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:40.495739   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:40.495794   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.495811   39129 retry.go:31] will retry after 1.404783651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.334388   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:41.396786   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.396852   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.396891   39129 retry.go:31] will retry after 1.10995967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.405068   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.405184   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.405534   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:41.405602   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:41.901437   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:41.905007   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.905077   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.905313   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.984043   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.984104   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.984123   39129 retry.go:31] will retry after 1.551735429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.404784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:42.507069   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:42.562010   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:42.565655   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.565695   39129 retry.go:31] will retry after 1.834850552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.904273   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.904413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.904767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.404422   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.536095   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:43.596578   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:43.596618   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.596641   39129 retry.go:31] will retry after 3.759083682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.905026   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.905109   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.905424   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:43.905474   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:44.401015   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:44.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.404608   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:44.466004   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:44.470131   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.470162   39129 retry.go:31] will retry after 3.734519465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.904450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.904746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.404448   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.404610   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.405391   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.905314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.905389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.905730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:45.905817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:46.404489   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.404597   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.404850   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:46.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.904888   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.905184   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.356864   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:47.404412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.420245   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:47.420295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.420315   39129 retry.go:31] will retry after 2.851566945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.904846   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.904912   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.905167   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:48.205865   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:48.269575   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:48.269614   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.269633   39129 retry.go:31] will retry after 3.250947796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.404858   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.404932   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.405259   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:48.405314   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:48.905121   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.905582   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.404258   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.404342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.904741   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.272194   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:50.327238   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:50.331229   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.331261   39129 retry.go:31] will retry after 4.377849152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.404603   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.404681   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.404972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.904412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:50.904763   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
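The node_ready warnings come from a readiness poll: every ~500ms minikube GETs /api/v1/nodes/functional-786978 and checks the node's Ready condition, treating "connection refused" as retryable while the apiserver restarts. A minimal client-go sketch of the same loop, assuming a standard clientset (not minikube's actual node_ready.go):

	package nodeready

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node object every 500ms until its Ready
	// condition is True or the timeout expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Transient errors (e.g. connection refused) are logged
					// and retried, matching the warnings in the log above.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}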
	I1211 00:11:51.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.404469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:51.521211   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:51.575865   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:51.579753   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.579788   39129 retry.go:31] will retry after 10.380601314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
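Note why validation itself is what fails here: kubectl apply first downloads the OpenAPI schema from the apiserver (https://localhost:8441/openapi/v2) to validate the manifest client-side, so with the apiserver down the command dies before any apply is attempted. Passing --validate=false would only skip that download; the apply would still fail against a dead apiserver, so retrying until :8441 answers is the right behavior. A hypothetical sketch of shelling out this command, with the paths copied from the log and the surrounding plumbing assumed (not minikube's actual ssh_runner code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// sudo accepts VAR=value arguments, so KUBECONFIG is set inline.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Printf("apply failed, will retry: %v\n%s", err, out)
		}
	}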
	I1211 00:11:51.905193   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.905263   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.905566   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.405257   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.405339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.405613   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.904681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:53.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:53.404852   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:53.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.904440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.904804   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.404471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.404754   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.709241   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:54.767641   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:54.771055   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.771086   39129 retry.go:31] will retry after 5.957769887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.904303   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.904405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.904730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.404312   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.404383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.404693   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.904394   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:55.904849   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:56.404616   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.404692   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.405015   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:56.904919   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.904989   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.905263   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.405131   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.905342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.905419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.905761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:57.905821   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:58.404329   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.404407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.404667   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:58.904372   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.404336   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.404718   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.904404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:00.404425   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.404531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.404943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:00.405022   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:00.729113   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:00.791242   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:00.794799   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.794830   39129 retry.go:31] will retry after 11.484844112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.905270   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.405214   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.405547   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.904696   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.904770   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.905114   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.961328   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:02.020749   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:02.024939   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.024971   39129 retry.go:31] will retry after 14.651232328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:02.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:02.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:03.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.404777   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:03.904466   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.904548   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.904897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.404457   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.404546   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.904381   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.904772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:04.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:05.404564   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.404650   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.405040   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:05.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.404608   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.404684   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.405046   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.905071   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.905390   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:06.905442   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:07.405193   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.405265   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.405584   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:07.904280   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.904352   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.404398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.904498   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:09.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.404791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:09.404848   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:09.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.404523   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.404396   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.404467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.904428   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.904505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.904831   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:11.904892   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:12.280537   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:12.342793   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:12.342833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.342853   39129 retry.go:31] will retry after 23.205348466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.405205   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.405602   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:12.904320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.904717   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.404271   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.404662   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.905297   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.905373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.905750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:13.905805   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:14.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:14.904352   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.904734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.404459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.904784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:16.404614   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.404686   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.405057   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:16.405114   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:16.676815   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:16.732715   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:16.736183   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.736213   39129 retry.go:31] will retry after 30.816141509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.404450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.404776   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.904286   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.904361   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.904615   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.404395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.904448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.904755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:18.904810   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:19.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.404533   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.404857   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:19.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.904394   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.904694   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.904311   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:21.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.404473   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:21.404887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:21.904789   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.904874   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.905204   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.404948   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.405273   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.905073   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.905146   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.905464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:23.405279   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.405347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.405687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:23.405741   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:23.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.904659   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.404824   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.404296   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.904463   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.904801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:25.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:26.404631   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.404718   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.405047   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:26.904918   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.904987   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.905309   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.405154   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.405588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.904400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:28.404315   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.404385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.404689   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:28.404748   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:28.904331   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.904750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.404573   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.404959   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.904646   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.904725   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.905092   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:30.404773   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.404846   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.405165   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:30.405221   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:30.904956   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.905034   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.905377   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.405001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.405072   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.405325   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.905650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.904301   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.904387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.904648   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:32.904697   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:33.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.404825   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:33.904520   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.904591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.404711   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.904339   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.904412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:34.904798   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:35.404390   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:35.549321   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:35.607106   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:35.610743   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:35.610780   39129 retry.go:31] will retry after 16.241459848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
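
[Editor's note] The "will retry after 16.241459848s" line comes from minikube's retry helper, which re-runs the failed kubectl apply after a randomized, growing delay. A minimal sketch of that pattern follows, assuming an exponential-plus-jitter schedule; the exact schedule retry.go uses may differ.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts run out,
// sleeping an exponentially growing, jittered delay between tries --
// the same shape as the "will retry after ..." lines in this log.
// The schedule here is an assumption, not minikube's exact one.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base << uint(i)                            // 2s, 4s, 8s, ...
		delay += time.Duration(rand.Int63n(int64(delay)+1)) // jitter
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(4, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused") // stand-in for the failing apply
		}
		return nil
	})
	fmt.Println("final:", err)
}
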
	[... polling continues unchanged from 00:12:35.905 through 00:12:47.405; every attempt fails with "connection refused", and node_ready.go logs a will-retry warning roughly every two seconds ...]
	I1211 00:12:47.553376   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:47.607763   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:47.611288   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.611317   39129 retry.go:31] will retry after 35.21019071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
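
[Editor's note] kubectl's hint to "turn validation off with --validate=false" is a red herring in this run: validation fails because fetching the OpenAPI schema needs a live apiserver, and the same dead apiserver would refuse the apply itself. For completeness, the suggested invocation would look like the sketch below, an illustrative wrapper built from the paths in this log, not minikube's command_runner.

package main

import (
	"fmt"
	"os/exec"
)

// applyWithoutValidation runs the apply this log keeps retrying, with
// the --validate=false flag the error message suggests. Against a dead
// apiserver it still fails: validation was never the real problem.
func applyWithoutValidation(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl",
		"--kubeconfig", kubeconfig,
		"apply", "--force", "--validate=false",
		"-f", manifest,
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	err := applyWithoutValidation(
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
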
	[... identical refused polls continue from 00:12:47.904 through 00:12:51.405 ...]
	I1211 00:12:51.852477   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:51.904839   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.904910   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.905174   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.907207   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910785   39129 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... the refused poll/warn cycle repeats unchanged from 00:12:52.404 through 00:13:22.405, with a "connection refused" will-retry warning roughly every two seconds ...]
	I1211 00:13:22.821681   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:13:22.876683   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880396   39129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1211 00:13:22.883693   39129 out.go:179] * Enabled addons: 
	I1211 00:13:22.887530   39129 addons.go:530] duration metric: took 1m46.476102717s for enable addons: enabled=[]
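
[Editor's note] At this point minikube gives up on both addons: every failure above shares one root cause, nothing listening on port 8441, so the enable step ends with an empty addon list after 1m46s of retries. When triaging a run like this, probing the apiserver's standard /readyz endpoint separates "apiserver down" from "manifest problem" before any kubectl work. A minimal diagnostic sketch: the /readyz endpoint is standard Kubernetes, while the wrapper itself is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeAPIServer GETs /readyz without credentials. With the apiserver
// down (as in this log) it returns the same "connection refused" the
// polls above show; once kube-apiserver is back it returns 200 "ok".
// TLS verification is skipped because this is a local diagnostic.
func probeAPIServer(base string) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("readyz: %d %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	if err := probeAPIServer("https://192.168.49.2:8441"); err != nil {
		fmt.Println("apiserver unreachable:", err)
	}
}
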
	I1211 00:13:22.904608   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.904678   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.904957   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:22.905000   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... further refused polls from 00:13:23.404 through 00:13:27.905, with will-retry warnings at 00:13:25.404 and 00:13:27.405; the log then cuts off mid-entry below ...]
	I1211 00:13:28.405246   39129 type.go:168] "Request Body" body=""
	I1211 00:13:28.405317   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:28.405598   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:28.904844   39129 type.go:168] "Request Body" body=""
	I1211 00:13:28.904917   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:28.905225   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:29.405028   39129 type.go:168] "Request Body" body=""
	I1211 00:13:29.405117   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:29.405404   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:29.405449   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:29.905168   39129 type.go:168] "Request Body" body=""
	I1211 00:13:29.905247   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:29.905504   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:30.405258   39129 type.go:168] "Request Body" body=""
	I1211 00:13:30.405331   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:30.405639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:30.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:13:30.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:30.904795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:31.404468   39129 type.go:168] "Request Body" body=""
	I1211 00:13:31.404537   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:31.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:31.904793   39129 type.go:168] "Request Body" body=""
	I1211 00:13:31.904867   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:31.905218   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:31.905275   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:32.405039   39129 type.go:168] "Request Body" body=""
	I1211 00:13:32.405110   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:32.405458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:32.905101   39129 type.go:168] "Request Body" body=""
	I1211 00:13:32.905197   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:32.905510   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:33.405238   39129 type.go:168] "Request Body" body=""
	I1211 00:13:33.405316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:33.405652   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:33.905292   39129 type.go:168] "Request Body" body=""
	I1211 00:13:33.905361   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:33.905671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:33.905728   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:34.404310   39129 type.go:168] "Request Body" body=""
	I1211 00:13:34.404382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:34.404620   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:34.904316   39129 type.go:168] "Request Body" body=""
	I1211 00:13:34.904389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:34.904718   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:35.404440   39129 type.go:168] "Request Body" body=""
	I1211 00:13:35.404512   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:35.404844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:35.904617   39129 type.go:168] "Request Body" body=""
	I1211 00:13:35.904692   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:35.908415   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1211 00:13:35.908524   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:36.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:13:36.404527   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:36.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:36.904387   39129 type.go:168] "Request Body" body=""
	I1211 00:13:36.904459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:36.904786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:37.404692   39129 type.go:168] "Request Body" body=""
	I1211 00:13:37.404758   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:37.405006   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:37.904670   39129 type.go:168] "Request Body" body=""
	I1211 00:13:37.904745   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:37.905089   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:38.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:13:38.404992   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:38.405353   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:38.405405   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:38.905139   39129 type.go:168] "Request Body" body=""
	I1211 00:13:38.905213   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:38.905467   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:39.405228   39129 type.go:168] "Request Body" body=""
	I1211 00:13:39.405305   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:39.405652   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:39.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:13:39.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:39.904766   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:40.404302   39129 type.go:168] "Request Body" body=""
	I1211 00:13:40.404373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:40.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:40.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:13:40.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:40.904753   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:40.904809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:41.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:13:41.404438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:41.404779   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:41.904302   39129 type.go:168] "Request Body" body=""
	I1211 00:13:41.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:41.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:42.404322   39129 type.go:168] "Request Body" body=""
	I1211 00:13:42.404413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:42.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:42.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:13:42.904422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:42.904720   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:43.404315   39129 type.go:168] "Request Body" body=""
	I1211 00:13:43.404385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:43.404647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:43.404698   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:43.904384   39129 type.go:168] "Request Body" body=""
	I1211 00:13:43.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:43.904781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:44.404476   39129 type.go:168] "Request Body" body=""
	I1211 00:13:44.404557   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:44.404888   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:44.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:13:44.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:44.904695   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:45.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:13:45.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:45.404787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:45.404841   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:45.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:13:45.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:45.904815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:46.404456   39129 type.go:168] "Request Body" body=""
	I1211 00:13:46.404520   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:46.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:46.904418   39129 type.go:168] "Request Body" body=""
	I1211 00:13:46.904492   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:46.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:47.404396   39129 type.go:168] "Request Body" body=""
	I1211 00:13:47.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:47.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:47.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:13:47.904405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:47.904688   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:47.904737   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:48.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:13:48.404478   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:48.404831   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:48.904535   39129 type.go:168] "Request Body" body=""
	I1211 00:13:48.904627   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:48.904963   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:49.404424   39129 type.go:168] "Request Body" body=""
	I1211 00:13:49.404502   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:49.404762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:49.904361   39129 type.go:168] "Request Body" body=""
	I1211 00:13:49.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:49.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:49.904846   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:50.404484   39129 type.go:168] "Request Body" body=""
	I1211 00:13:50.404567   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:50.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:50.904312   39129 type.go:168] "Request Body" body=""
	I1211 00:13:50.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:50.904631   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:51.404397   39129 type.go:168] "Request Body" body=""
	I1211 00:13:51.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:51.404781   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:51.904771   39129 type.go:168] "Request Body" body=""
	I1211 00:13:51.904845   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:51.905178   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:51.905230   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:52.404316   39129 type.go:168] "Request Body" body=""
	I1211 00:13:52.404412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:52.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:52.904443   39129 type.go:168] "Request Body" body=""
	I1211 00:13:52.904515   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:52.904867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:53.404558   39129 type.go:168] "Request Body" body=""
	I1211 00:13:53.404636   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:53.404950   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:53.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:13:53.904402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:53.904654   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:54.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:13:54.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:54.404826   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:54.404880   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:54.904552   39129 type.go:168] "Request Body" body=""
	I1211 00:13:54.904633   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:54.904969   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:55.404658   39129 type.go:168] "Request Body" body=""
	I1211 00:13:55.404733   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:55.405025   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:55.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:13:55.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:55.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:56.404581   39129 type.go:168] "Request Body" body=""
	I1211 00:13:56.404661   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:56.404984   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:56.405049   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:56.904867   39129 type.go:168] "Request Body" body=""
	I1211 00:13:56.904933   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:56.905188   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:57.404988   39129 type.go:168] "Request Body" body=""
	I1211 00:13:57.405064   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:57.405398   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:57.905216   39129 type.go:168] "Request Body" body=""
	I1211 00:13:57.905310   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:57.905575   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:58.404252   39129 type.go:168] "Request Body" body=""
	I1211 00:13:58.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:58.404664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:58.904398   39129 type.go:168] "Request Body" body=""
	I1211 00:13:58.904491   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:58.904791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:58.904844   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:59.404513   39129 type.go:168] "Request Body" body=""
	I1211 00:13:59.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:59.404873   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:59.904538   39129 type.go:168] "Request Body" body=""
	I1211 00:13:59.904626   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:59.904952   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:00.404414   39129 type.go:168] "Request Body" body=""
	I1211 00:14:00.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:00.404845   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:00.904385   39129 type.go:168] "Request Body" body=""
	I1211 00:14:00.904466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:00.904782   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:01.404336   39129 type.go:168] "Request Body" body=""
	I1211 00:14:01.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:01.404702   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:01.404753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:01.904380   39129 type.go:168] "Request Body" body=""
	I1211 00:14:01.904471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:01.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:02.404369   39129 type.go:168] "Request Body" body=""
	I1211 00:14:02.404443   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:02.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:02.904253   39129 type.go:168] "Request Body" body=""
	I1211 00:14:02.904328   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:02.904579   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:03.404289   39129 type.go:168] "Request Body" body=""
	I1211 00:14:03.404365   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:03.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:03.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:03.904382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:03.904744   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:03.904802   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:04.404326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:04.404677   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:04.904415   39129 type.go:168] "Request Body" body=""
	I1211 00:14:04.904489   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:04.904786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:05.404376   39129 type.go:168] "Request Body" body=""
	I1211 00:14:05.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:05.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:05.904294   39129 type.go:168] "Request Body" body=""
	I1211 00:14:05.904370   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:05.904638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:06.404572   39129 type.go:168] "Request Body" body=""
	I1211 00:14:06.404651   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:06.404978   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:06.405038   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:06.904928   39129 type.go:168] "Request Body" body=""
	I1211 00:14:06.905005   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:06.905317   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:07.405083   39129 type.go:168] "Request Body" body=""
	I1211 00:14:07.405159   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:07.405430   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:07.905191   39129 type.go:168] "Request Body" body=""
	I1211 00:14:07.905272   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:07.905606   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:08.405305   39129 type.go:168] "Request Body" body=""
	I1211 00:14:08.405379   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:08.405705   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:08.405759   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:08.904405   39129 type.go:168] "Request Body" body=""
	I1211 00:14:08.904478   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:08.904791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:09.404479   39129 type.go:168] "Request Body" body=""
	I1211 00:14:09.404557   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:09.404900   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:09.904480   39129 type.go:168] "Request Body" body=""
	I1211 00:14:09.904557   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:09.904874   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:10.404321   39129 type.go:168] "Request Body" body=""
	I1211 00:14:10.404389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:10.404716   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:10.904442   39129 type.go:168] "Request Body" body=""
	I1211 00:14:10.904520   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:10.904925   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:10.904988   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:11.404652   39129 type.go:168] "Request Body" body=""
	I1211 00:14:11.404728   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:11.405053   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:11.904891   39129 type.go:168] "Request Body" body=""
	I1211 00:14:11.904965   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:11.905216   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:12.405049   39129 type.go:168] "Request Body" body=""
	I1211 00:14:12.405126   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:12.405453   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:12.905247   39129 type.go:168] "Request Body" body=""
	I1211 00:14:12.905323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:12.905654   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:12.905713   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:13.404308   39129 type.go:168] "Request Body" body=""
	I1211 00:14:13.404377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:13.404632   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:13.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:14:13.904402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:13.904741   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:14.404393   39129 type.go:168] "Request Body" body=""
	I1211 00:14:14.404466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:14.404802   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:14.904475   39129 type.go:168] "Request Body" body=""
	I1211 00:14:14.904544   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:14.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:15.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:15.404427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:15.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:15.404807   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:15.904366   39129 type.go:168] "Request Body" body=""
	I1211 00:14:15.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:15.904770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:16.404677   39129 type.go:168] "Request Body" body=""
	I1211 00:14:16.404753   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:16.405004   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:16.904978   39129 type.go:168] "Request Body" body=""
	I1211 00:14:16.905048   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:16.905358   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:17.405158   39129 type.go:168] "Request Body" body=""
	I1211 00:14:17.405235   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:17.405552   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:17.405610   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:17.904261   39129 type.go:168] "Request Body" body=""
	I1211 00:14:17.904334   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:17.904588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:18.404411   39129 type.go:168] "Request Body" body=""
	I1211 00:14:18.404498   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:18.404847   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:18.904402   39129 type.go:168] "Request Body" body=""
	I1211 00:14:18.904472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:18.904738   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:19.404325   39129 type.go:168] "Request Body" body=""
	I1211 00:14:19.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:19.404737   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:19.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:14:19.904453   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:19.904768   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:19.904817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:20.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.404818   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:20.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.904822   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.404460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.404763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.904387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:22.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.404422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.404708   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:22.404753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:22.904479   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.904556   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.904841   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.404574   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.904305   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.904373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.404251   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.404345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.404672   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.904409   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.904486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.904832   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:24.904887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:25.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.404461   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.404736   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:25.904509   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.904583   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.404731   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.404818   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.904985   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.905061   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.905327   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:26.905366   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:27.405132   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.405207   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:27.905312   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.905383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.905699   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.404639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:29.404330   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:29.404817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:29.904445   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.904517   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.904836   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.404772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:31.404452   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.404538   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.404813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:31.404867   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:31.904825   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.904902   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.905256   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.405133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.405434   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.905146   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.905216   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.905460   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:33.405223   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.405303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.405614   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:33.405669   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:33.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.904380   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.404393   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.904353   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.404418   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.904262   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.904332   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:35.904703   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:36.404548   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.404942   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:36.904920   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.905001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.405180   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.405250   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.405549   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:37.904735   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:38.404400   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:38.904471   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.904540   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.904868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.404739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:40.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.404655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:40.404705   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
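
Each W line originates in a readiness helper (node_ready.go) that fetches the Node object and inspects its "Ready" condition; while the dial fails there is no object to inspect, so it logs and retries. A sketch of that condition check is below. The struct names are assumptions for illustration, but the JSON field names (status.conditions[].type/status) follow the real Kubernetes Node API shape:

package main

import (
	"encoding/json"
	"fmt"
)

// Abbreviated view of a Kubernetes Node object: only the fields the
// readiness check needs are modeled here.
type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the Node's "Ready" condition is "True".
func isReady(raw []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, fmt.Errorf("no Ready condition found")
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := isReady(sample)
	fmt.Println(ok, err)
}
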
	I1211 00:14:40.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.404427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.404749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.904650   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.904717   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.904964   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:42.404693   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.404775   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.405115   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:42.405176   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:42.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.905044   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.905384   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.405173   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.405244   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.405506   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.904350   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.904566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:44.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:45.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.404848   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:45.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.404605   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.404878   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.905004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.905351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:46.905406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:47.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.405257   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.405597   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:47.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.904346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.904600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.404314   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.904530   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.904960   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:49.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:49.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:49.904389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.904823   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.404429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.404707   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.904397   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.904467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:51.904876   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:52.404539   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.404611   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.404868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:52.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.904488   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.404507   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.404909   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.904751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:54.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:54.404785   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:54.905091   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.905164   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.905461   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.405287   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.405536   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.905310   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.905400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.905792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:56.404664   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.404738   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.405079   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:56.405134   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:56.904863   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.904929   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.905177   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.404950   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.405032   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.405383   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.905061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.905135   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.905490   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:58.405233   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.405306   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.405559   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:58.405605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:58.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.904345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.404487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.404786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.904269   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.904338   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.904596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.404353   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.904439   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.904522   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.904908   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:00.904971   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:01.404441   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:01.904840   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.904916   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.905261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.405074   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.405158   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.405505   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.905255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.905626   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:02.905685   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:03.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:03.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.904501   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.904287   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.904363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.904668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:05.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:05.404809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:05.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.904390   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.404538   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.404621   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.404968   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.905084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.905399   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:07.405134   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.405202   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.405455   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:07.405496   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
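
The I/W lines throughout this trace use klog's structured key="value" format, emitted by InfoS-style calls in client-go's round_trippers at high verbosity. A tiny sketch that reproduces the layout, assuming the k8s.io/klog/v2 module is available; the message and keys mirror the lines above and are not claimed to be the exact call sites:

package main

import (
	"k8s.io/klog/v2"
)

func main() {
	defer klog.Flush()
	// klog prefixes each line with severity, date, time, pid, and
	// file:line, then renders the message and key/value pairs, e.g.
	// I1211 00:15:07.905316   39129 main.go:11] "Request" verb="GET" url="..."
	klog.InfoS("Request", "verb", "GET",
		"url", "https://192.168.49.2:8441/api/v1/nodes/functional-786978")
	klog.InfoS("Response", "status", "", "milliseconds", 0)
}
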
	I1211 00:15:07.905236   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.905316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.905668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.404259   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.404335   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.404669   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.904348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.904675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.404767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.904456   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.904528   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.904872   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:09.904926   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:10.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.404420   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:10.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.904438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.404399   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.904304   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.904386   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.904651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:12.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.404820   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:12.404875   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:12.904549   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.904630   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.904969   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.405324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.405622   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:13.904354   39129 type.go:168] "Request Body" body=""
	I1211 00:15:13.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:13.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.404360   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.404751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:14.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:15:14.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:14.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:14.904865   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:15.404372   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.404803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:15.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:15:15.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:15.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.404554   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:16.904714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:16.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:16.905117   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:16.905186   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:17.404926   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.404997   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.405333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:17.905100   39129 type.go:168] "Request Body" body=""
	I1211 00:15:17.905177   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:17.905446   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.405222   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.405312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.405665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:18.904275   39129 type.go:168] "Request Body" body=""
	I1211 00:15:18.904355   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:18.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:19.404415   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.404483   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.404828   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:19.404886   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:19.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:19.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:19.904762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.404498   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:20.904600   39129 type.go:168] "Request Body" body=""
	I1211 00:15:20.904693   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:20.904972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:21.404647   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.405062   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:21.405116   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:21.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:15:21.905031   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:21.905358   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.405138   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.405205   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.405465   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:22.905211   39129 type.go:168] "Request Body" body=""
	I1211 00:15:22.905339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:22.905644   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.404356   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.404765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:23.904327   39129 type.go:168] "Request Body" body=""
	I1211 00:15:23.904406   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:23.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:23.904809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:24.404449   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.404844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:24.904567   39129 type.go:168] "Request Body" body=""
	I1211 00:15:24.904647   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:24.904980   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.404518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.404591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.404896   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:25.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:15:25.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:25.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:25.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:26.404543   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.404952   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:26.904722   39129 type.go:168] "Request Body" body=""
	I1211 00:15:26.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:26.905041   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.404714   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.404795   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:27.904869   39129 type.go:168] "Request Body" body=""
	I1211 00:15:27.904942   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:27.905254   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:27.905309   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:28.405022   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.405096   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.405402   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:28.905177   39129 type.go:168] "Request Body" body=""
	I1211 00:15:28.905254   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:28.905568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.404313   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.404393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:29.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:15:29.904395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:29.904647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:30.404357   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.404434   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:30.404784   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:30.904438   39129 type.go:168] "Request Body" body=""
	I1211 00:15:30.904510   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:30.904846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.404410   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.404482   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.404742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:31.904715   39129 type.go:168] "Request Body" body=""
	I1211 00:15:31.904789   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:31.905138   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:32.404902   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.404973   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.405298   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:32.405356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:32.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:15:32.905100   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:32.905353   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.405141   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.405225   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.405565   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:33.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:15:33.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:33.904778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.404473   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.404543   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.404861   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:34.904347   39129 type.go:168] "Request Body" body=""
	I1211 00:15:34.904417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:34.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:34.904828   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:35.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.404570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.404888   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:35.904565   39129 type.go:168] "Request Body" body=""
	I1211 00:15:35.904641   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:35.904947   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.404649   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.404729   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.405029   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:36.904816   39129 type.go:168] "Request Body" body=""
	I1211 00:15:36.904901   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:36.905206   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:36.905255   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:37.404887   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.404952   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.405287   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:37.904915   39129 type.go:168] "Request Body" body=""
	I1211 00:15:37.904985   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:37.905278   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.405156   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.405464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:38.905056   39129 type.go:168] "Request Body" body=""
	I1211 00:15:38.905124   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:38.905378   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:38.905418   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:39.405219   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.405647   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:39.904336   39129 type.go:168] "Request Body" body=""
	I1211 00:15:39.904409   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:39.904756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.404291   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.404607   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:40.904308   39129 type.go:168] "Request Body" body=""
	I1211 00:15:40.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:40.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:41.404428   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.404503   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:41.404925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:41.904306   39129 type.go:168] "Request Body" body=""
	I1211 00:15:41.904378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:41.904685   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.404364   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:42.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:15:42.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:42.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:43.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.405484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:43.405524   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:43.905240   39129 type.go:168] "Request Body" body=""
	I1211 00:15:43.905312   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:43.905656   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.404466   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.404771   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:44.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:15:44.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:44.904706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.404373   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.404448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.404764   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:45.904503   39129 type.go:168] "Request Body" body=""
	I1211 00:15:45.904579   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:45.904930   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:45.905003   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:46.404675   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.404755   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.405031   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:46.904971   39129 type.go:168] "Request Body" body=""
	I1211 00:15:46.905045   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:46.905387   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.405184   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.405266   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.405600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:47.904290   39129 type.go:168] "Request Body" body=""
	I1211 00:15:47.904358   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:47.904588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:48.404282   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.404778   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:48.404837   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:48.904518   39129 type.go:168] "Request Body" body=""
	I1211 00:15:48.904616   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:48.904965   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.404435   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.404505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.404851   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:49.904395   39129 type.go:168] "Request Body" body=""
	I1211 00:15:49.904468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:49.904810   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:50.904414   39129 type.go:168] "Request Body" body=""
	I1211 00:15:50.904489   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:50.904743   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:50.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:51.404343   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.404705   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:51.904585   39129 type.go:168] "Request Body" body=""
	I1211 00:15:51.904663   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:51.904998   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.404551   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.404622   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.404875   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:52.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:15:52.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:52.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:53.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.404762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:53.404819   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:53.904321   39129 type.go:168] "Request Body" body=""
	I1211 00:15:53.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:53.904670   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.404457   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:54.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:54.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:54.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.404310   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:55.904444   39129 type.go:168] "Request Body" body=""
	I1211 00:15:55.904531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:55.904876   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:55.904925   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:56.404575   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.404977   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:56.904836   39129 type.go:168] "Request Body" body=""
	I1211 00:15:56.904913   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:56.905188   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:57.404951   39129 type.go:168] "Request Body" body=""
	I1211 00:15:57.405027   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:57.405355   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:57.905048   39129 type.go:168] "Request Body" body=""
	I1211 00:15:57.905133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:57.905458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:57.905511   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:58.405161   39129 type.go:168] "Request Body" body=""
	I1211 00:15:58.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:58.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:58.905188   39129 type.go:168] "Request Body" body=""
	I1211 00:15:58.905266   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:58.905560   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:59.404270   39129 type.go:168] "Request Body" body=""
	I1211 00:15:59.404349   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:59.404684   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:59.904284   39129 type.go:168] "Request Body" body=""
	I1211 00:15:59.904359   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:59.904655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:00.404418   39129 type.go:168] "Request Body" body=""
	I1211 00:16:00.404515   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:00.404905   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:00.404957   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:00.904950   39129 type.go:168] "Request Body" body=""
	I1211 00:16:00.905035   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:00.905354   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:01.405093   39129 type.go:168] "Request Body" body=""
	I1211 00:16:01.405167   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:01.405430   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:01.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:16:01.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:01.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:02.404498   39129 type.go:168] "Request Body" body=""
	I1211 00:16:02.404580   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:02.404922   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:02.404978   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:02.904523   39129 type.go:168] "Request Body" body=""
	I1211 00:16:02.904595   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:02.904914   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:03.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:16:03.404450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:03.404782   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:03.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:16:03.904572   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:03.904928   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:04.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:16:04.404607   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:04.404926   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:04.904614   39129 type.go:168] "Request Body" body=""
	I1211 00:16:04.904693   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:04.905032   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:04.905090   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:05.404755   39129 type.go:168] "Request Body" body=""
	I1211 00:16:05.404828   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:05.405160   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:05.904838   39129 type.go:168] "Request Body" body=""
	I1211 00:16:05.904911   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:05.905161   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:06.405082   39129 type.go:168] "Request Body" body=""
	I1211 00:16:06.405156   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:06.405465   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:06.904402   39129 type.go:168] "Request Body" body=""
	I1211 00:16:06.904483   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:06.904844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:07.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:16:07.404456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:07.404812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:07.404870   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:07.904508   39129 type.go:168] "Request Body" body=""
	I1211 00:16:07.904584   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:07.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:08.404619   39129 type.go:168] "Request Body" body=""
	I1211 00:16:08.404701   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:08.405096   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:08.904878   39129 type.go:168] "Request Body" body=""
	I1211 00:16:08.904962   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:08.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:09.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:16:09.405142   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:09.405475   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:09.405528   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[the GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 request/response cycle above repeats unchanged every ~500ms from 00:16:09.905 through 00:17:09.905, never receiving a response; the node_ready.go:55 "connection refused" warning recurs roughly every two seconds throughout, the final occurrence being:]
	W1211 00:17:09.404710   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:10.405242   39129 type.go:168] "Request Body" body=""
	I1211 00:17:10.405315   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:10.405609   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:10.904309   39129 type.go:168] "Request Body" body=""
	I1211 00:17:10.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:10.904702   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:11.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:17:11.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:11.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:11.404842   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:11.904759   39129 type.go:168] "Request Body" body=""
	I1211 00:17:11.904838   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:11.905118   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:12.404893   39129 type.go:168] "Request Body" body=""
	I1211 00:17:12.404967   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:12.405296   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:12.905088   39129 type.go:168] "Request Body" body=""
	I1211 00:17:12.905195   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:12.905511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:13.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.405329   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.405579   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:13.405619   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:13.904333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:13.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:13.904761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.404335   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.404408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.404714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:14.904317   39129 type.go:168] "Request Body" body=""
	I1211 00:17:14.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:14.904641   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.404300   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.404377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.404706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:15.904411   39129 type.go:168] "Request Body" body=""
	I1211 00:17:15.904487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:15.904783   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:15.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:16.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.404582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:16.904777   39129 type.go:168] "Request Body" body=""
	I1211 00:17:16.904859   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:16.905166   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.404976   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.405047   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.405346   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:17.905085   39129 type.go:168] "Request Body" body=""
	I1211 00:17:17.905148   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:17.905484   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:17.905576   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:18.405320   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.405400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.405752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:18.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:17:18.904452   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:18.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.404450   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.404524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.404839   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:19.904351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:19.904429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:19.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:20.404466   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.404542   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.404867   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:20.404929   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:20.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:17:20.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:20.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.404759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:21.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:17:21.904459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:21.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.404295   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.404368   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.404715   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:22.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:17:22.904484   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:22.904811   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:22.904863   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:23.404333   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.404410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.404731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:23.904406   39129 type.go:168] "Request Body" body=""
	I1211 00:17:23.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:23.904749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.404371   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:24.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:24.904474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:24.904844   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:24.904898   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:25.404303   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.404370   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.404698   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:25.904598   39129 type.go:168] "Request Body" body=""
	I1211 00:17:25.904676   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:25.905012   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.404650   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.404723   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.405090   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:26.904820   39129 type.go:168] "Request Body" body=""
	I1211 00:17:26.904890   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:26.905169   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:26.905212   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:27.404936   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.405356   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:27.905133   39129 type.go:168] "Request Body" body=""
	I1211 00:17:27.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:27.905529   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.405274   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.405341   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.405686   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:28.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:17:28.904432   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:28.904739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:29.404458   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.404541   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.404884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:29.404943   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:29.904284   39129 type.go:168] "Request Body" body=""
	I1211 00:17:29.904367   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:29.904684   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.404462   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:30.904507   39129 type.go:168] "Request Body" body=""
	I1211 00:17:30.904582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:30.904891   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.404374   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:31.904703   39129 type.go:168] "Request Body" body=""
	I1211 00:17:31.904772   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:31.908235   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1211 00:17:31.908301   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:32.405044   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.405123   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.405443   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:32.905095   39129 type.go:168] "Request Body" body=""
	I1211 00:17:32.905166   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:32.905421   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.405170   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.405251   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.405557   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:33.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:17:33.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:33.905635   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:34.404308   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.404378   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.404675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:34.404722   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:34.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:17:34.904444   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:34.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:35.904434   39129 type.go:168] "Request Body" body=""
	I1211 00:17:35.904506   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:35.904785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:36.404582   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.404662   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.404987   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:36.405043   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:36.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.904799   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:37.404341   39129 type.go:168] "Request Body" body=""
	I1211 00:17:37.404399   39129 node_ready.go:38] duration metric: took 6m0.000266247s for node "functional-786978" to be "Ready" ...
	I1211 00:17:37.407624   39129 out.go:203] 
	W1211 00:17:37.410619   39129 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1211 00:17:37.410819   39129 out.go:285] * 
	W1211 00:17:37.413036   39129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:17:37.415867   39129 out.go:203] 
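
The six minutes of identical GETs above are minikube's node-readiness poll (node_ready.go) firing every 500ms until its deadline, with every attempt refused because nothing answers on 192.168.49.2:8441. A minimal sketch of the same wait pattern using client-go — the kubeconfig path and node name are taken from this log, but the program is illustrative only, not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path and node name copied from the log above; adjust for other clusters.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms, give up after 6 minutes -- the same budget the log shows.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-786978", metav1.GetOptions{})
    			if err != nil {
    				// Transient errors (e.g. connection refused) keep the poll going.
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("ready wait result:", err) // context deadline exceeded on timeout
    }

Returning (false, nil) on transient errors is what keeps such a loop retrying through "connection refused" instead of aborting early, which matches the repeated W-level retry messages above.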
	
	
	==> CRI-O <==
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.683978999Z" level=info msg="Using the internal default seccomp profile"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.683987065Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.68399408Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684000078Z" level=info msg="RDT not available in the host system"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684012575Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684721181Z" level=info msg="Conmon does support the --sync option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684744254Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.684759869Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685472873Z" level=info msg="Conmon does support the --sync option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685489841Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.685616727Z" level=info msg="Updated default CNI network name to "
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686203986Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686552619Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.686608021Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.72384117Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.723993648Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724046014Z" level=info msg="Create NRI interface"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724146438Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724162447Z" level=info msg="runtime interface created"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724176445Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724182738Z" level=info msg="runtime interface starting up..."
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.72418872Z" level=info msg="starting plugins..."
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724200634Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:11:34 functional-786978 crio[5370]: time="2025-12-11T00:11:34.724260606Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:11:34 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
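
CRI-O itself starts cleanly here, and the dumped configuration shows it listening on /var/run/crio/crio.sock. A quick way to confirm the runtime socket is accepting connections, sketched with only the standard library (the socket path is the one from the config above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Socket path comes from the `listen` key in the CRI-O config dump above.
    	conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
    	if err != nil {
    		fmt.Println("crio socket not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("crio socket is accepting connections")
    }

A successful dial here, combined with the apiserver failures below, localizes the problem above the container runtime: CRI-O is up, but the control plane never comes back.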
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:17:42.041608    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:42.042191    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:42.043961    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:42.044549    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:42.046280    8755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
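
kubectl fails for the same underlying reason as the poll loop earlier: nothing is listening on port 8441. A direct TCP probe distinguishes a down apiserver from kubeconfig or DNS problems; a sketch using the two endpoints seen in this log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoints the failing requests target.
    	for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" while the apiserver is down
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s: listening\n", addr)
    	}
    }

"Connection refused" means the TCP stack answered and no process holds the port; a timeout instead would point at routing or firewalling rather than a stopped apiserver.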
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:17:42 up 29 min,  0 user,  load average: 0.31, 0.29, 0.47
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:17:39 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:40 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 11 00:17:40 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:40 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:40 functional-786978 kubelet[8631]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:40 functional-786978 kubelet[8631]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:40 functional-786978 kubelet[8631]: E1211 00:17:40.469106    8631 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:40 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:40 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 815.
	Dec 11 00:17:41 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:41 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:41 functional-786978 kubelet[8667]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:41 functional-786978 kubelet[8667]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:41 functional-786978 kubelet[8667]: E1211 00:17:41.215378    8667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 11 00:17:41 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:41 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:41 functional-786978 kubelet[8738]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:41 functional-786978 kubelet[8738]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:41 functional-786978 kubelet[8738]: E1211 00:17:41.962758    8738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:41 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
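
The kubelet crash loop (restart counters 814 through 816) is a configuration validation failure rather than a crash: this kubelet is configured not to run on a cgroup v1 host, and systemd keeps restarting it into the same rejection. Whether a host is on cgroup v1 or v2 is one statfs call away; a sketch using golang.org/x/sys/unix (Linux-only, illustrative):

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		fmt.Println("statfs failed:", err)
    		return
    	}
    	// cgroup v2 ("unified" mode) mounts cgroup2fs directly at /sys/fs/cgroup.
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified) host")
    	} else {
    		fmt.Println("cgroup v1 host - rejected by this kubelet's validation")
    	}
    }

The dmesg section above shows a 5.15 Ubuntu 20.04 host kernel, consistent with a legacy cgroup v1 hierarchy and hence with the validation error the kubelet keeps logging.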
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (319.99995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.47s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 kubectl -- --context functional-786978 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 kubectl -- --context functional-786978 get pods: exit status 1 (107.675547ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-786978 kubectl -- --context functional-786978 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
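The inspect output above is where the harness learns how to reach the node: HostConfig.PortBindings requests dynamically assigned host ports ("HostPort": "") for 22, 2376, 5000, 8441 and 32443, and NetworkSettings.Ports records what Docker actually allocated, e.g. 22/tcp published on 127.0.0.1:32783, which the provisioning log below dials for SSH. A minimal Go sketch of that lookup, shelling out to docker with the same Go template the cli_runner lines use (program and function names here are illustrative, not minikube's own):

// portlookup.go: a sketch, not minikube's implementation. It recovers the
// host port Docker assigned to a published container port, using the same
// Go template as the "docker container inspect -f" calls in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("functional-786978", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + p)
}

Against the container above this prints 32783, matching the SSH connections made at 00:11:31.811 in the start log below.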
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (317.761692ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 logs -n 25: (1.068497443s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-976823 image ls --format short --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh     │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image   │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete  │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start   │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start   │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:latest                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add minikube-local-cache-test:functional-786978                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache delete minikube-local-cache-test:functional-786978                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl images                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ cache   │ functional-786978 cache reload                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ kubectl │ functional-786978 kubectl -- --context functional-786978 get pods                                                                                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:11:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:11:31.563230   39129 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:11:31.563658   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563678   39129 out.go:374] Setting ErrFile to fd 2...
	I1211 00:11:31.563685   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563986   39129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:11:31.564407   39129 out.go:368] Setting JSON to false
	I1211 00:11:31.565211   39129 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1378,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:11:31.565283   39129 start.go:143] virtualization:  
	I1211 00:11:31.568710   39129 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:11:31.572525   39129 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:11:31.572647   39129 notify.go:221] Checking for updates...
	I1211 00:11:31.578309   39129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:11:31.581264   39129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:31.584071   39129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:11:31.586801   39129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:11:31.589632   39129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:11:31.593067   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:31.593203   39129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:11:31.624525   39129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:11:31.624640   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.680227   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.670392474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.680335   39129 docker.go:319] overlay module found
	I1211 00:11:31.683507   39129 out.go:179] * Using the docker driver based on existing profile
	I1211 00:11:31.686334   39129 start.go:309] selected driver: docker
	I1211 00:11:31.686351   39129 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.686457   39129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:11:31.686564   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.744265   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.73545255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.744665   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:31.744728   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:31.744781   39129 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.747938   39129 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:11:31.750895   39129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:11:31.753857   39129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:11:31.756592   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:31.756636   39129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:11:31.756650   39129 cache.go:65] Caching tarball of preloaded images
	I1211 00:11:31.756687   39129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:11:31.756736   39129 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:11:31.756746   39129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:11:31.756847   39129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:11:31.775263   39129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:11:31.775283   39129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:11:31.775304   39129 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:11:31.775335   39129 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:11:31.775391   39129 start.go:364] duration metric: took 34.412µs to acquireMachinesLock for "functional-786978"
	I1211 00:11:31.775414   39129 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:11:31.775420   39129 fix.go:54] fixHost starting: 
	I1211 00:11:31.775679   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:31.791888   39129 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:11:31.791920   39129 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:11:31.795111   39129 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:11:31.795143   39129 machine.go:94] provisionDockerMachine start ...
	I1211 00:11:31.795229   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.811419   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.811754   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.811770   39129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:11:31.962366   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:31.962392   39129 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:11:31.962456   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.979928   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.980236   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.980251   39129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:11:32.139976   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:32.140054   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.158886   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.159253   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.159279   39129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:11:32.307553   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
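	The heredoc just executed keeps /etc/hosts idempotent: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. A rough Go rendering of the same decision logic, as a sketch only (the real step is plain shell over SSH, exactly as logged):

// hostsfix.go: a sketch of the idempotent /etc/hosts update above.
// ensureHostname mirrors the shell: keep the file if the name already
// resolves, rewrite an existing 127.0.1.1 line, else append one.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func ensureHostname(hosts, name string) string {
	// grep -xq '.*\s<name>': is there already a line ending in the hostname?
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts
	}
	// sed 's/^127.0.1.1\s.*/127.0.1.1 <name>/': rewrite an existing entry.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// tee -a: append a fresh entry.
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "functional-786978"))
}

Checking for any resolvable entry first means a correct file is never touched, which is why the step is safe to re-run on every start, as happens here against an already-running container.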
	I1211 00:11:32.307588   39129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:11:32.307609   39129 ubuntu.go:190] setting up certificates
	I1211 00:11:32.307618   39129 provision.go:84] configureAuth start
	I1211 00:11:32.307677   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:32.326881   39129 provision.go:143] copyHostCerts
	I1211 00:11:32.326928   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.326981   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:11:32.326990   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.327094   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:11:32.327189   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327219   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:11:32.327229   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327259   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:11:32.327306   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327328   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:11:32.327337   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327369   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:11:32.327438   39129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:11:32.651770   39129 provision.go:177] copyRemoteCerts
	I1211 00:11:32.651883   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:11:32.651966   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.672496   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:32.786699   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 00:11:32.786771   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:11:32.804288   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 00:11:32.804348   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:11:32.822111   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 00:11:32.822172   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 00:11:32.839310   39129 provision.go:87] duration metric: took 531.679958ms to configureAuth
	I1211 00:11:32.839337   39129 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:11:32.839540   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:32.839656   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.857209   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.857554   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.857577   39129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:11:33.187304   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:11:33.187369   39129 machine.go:97] duration metric: took 1.392217167s to provisionDockerMachine
	I1211 00:11:33.187397   39129 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:11:33.187428   39129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:11:33.187507   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:11:33.187571   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.206116   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.310766   39129 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:11:33.313950   39129 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1211 00:11:33.313971   39129 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1211 00:11:33.313977   39129 command_runner.go:130] > VERSION_ID="12"
	I1211 00:11:33.313982   39129 command_runner.go:130] > VERSION="12 (bookworm)"
	I1211 00:11:33.313987   39129 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1211 00:11:33.313990   39129 command_runner.go:130] > ID=debian
	I1211 00:11:33.313995   39129 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1211 00:11:33.314000   39129 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1211 00:11:33.314006   39129 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1211 00:11:33.314074   39129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:11:33.314099   39129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:11:33.314110   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:11:33.314165   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:11:33.314254   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:11:33.314265   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 00:11:33.314342   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:11:33.314349   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> /etc/test/nested/copy/4875/hosts
	I1211 00:11:33.314395   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:11:33.321833   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:33.338845   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:11:33.355788   39129 start.go:296] duration metric: took 168.358579ms for postStartSetup
	I1211 00:11:33.355933   39129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:11:33.355981   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.374136   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.483570   39129 command_runner.go:130] > 14%
	I1211 00:11:33.484133   39129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:11:33.488331   39129 command_runner.go:130] > 168G
	I1211 00:11:33.488874   39129 fix.go:56] duration metric: took 1.713448769s for fixHost
	I1211 00:11:33.488896   39129 start.go:83] releasing machines lock for "functional-786978", held for 1.713491657s
	I1211 00:11:33.488966   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:33.505970   39129 ssh_runner.go:195] Run: cat /version.json
	I1211 00:11:33.506004   39129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:11:33.506020   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.506067   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.524523   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.532688   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.712031   39129 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1211 00:11:33.714840   39129 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1211 00:11:33.715004   39129 ssh_runner.go:195] Run: systemctl --version
	I1211 00:11:33.720988   39129 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1211 00:11:33.721023   39129 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1211 00:11:33.721418   39129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:11:33.758142   39129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 00:11:33.762640   39129 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1211 00:11:33.762695   39129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:11:33.762759   39129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:11:33.770580   39129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:11:33.770605   39129 start.go:496] detecting cgroup driver to use...
	I1211 00:11:33.770636   39129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:11:33.770683   39129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:11:33.785751   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:11:33.798781   39129 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:11:33.798859   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:11:33.814594   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:11:33.828060   39129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:11:33.939426   39129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:11:34.063996   39129 docker.go:234] disabling docker service ...
	I1211 00:11:34.064079   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:11:34.088847   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:11:34.106427   39129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:11:34.233444   39129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:11:34.359250   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:11:34.371772   39129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:11:34.384768   39129 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
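	The crictl.yaml written above is a one-field config that pins crictl to CRI-O's socket instead of letting it probe default endpoints. The same write expressed in Go, as a sketch (the run does it with sudo tee over SSH; path and contents are taken verbatim from the log, and running this locally would need root):

// crictlcfg.go: sketch of the /etc/crictl.yaml write shown above.
package main

import "os"

func main() {
	cfg := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	// 0644 is an assumed mode; the logged command leaves it to tee's default.
	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
		panic(err)
	}
}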
	I1211 00:11:34.385910   39129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:11:34.386015   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.395329   39129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:11:34.395408   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.404378   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.412986   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.421585   39129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:11:34.429722   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.438361   39129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.447060   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
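	The sed runs from 00:11:34.386 through 00:11:34.447 patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "cgroupfs" (matching the driver detected on the host at 00:11:33.770), re-create the conmon_cgroup entry, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A pure-Go sketch of the first substitution, handy for checking the pattern against a sample config (the sample content below is invented for illustration):

// pauseimage.go: sketch of the first sed above as a Go line transform.
package main

import (
	"fmt"
	"regexp"
)

// setPauseImage replaces any existing pause_image line, like
// sed 's|^.*pause_image = .*$|pause_image = "<image>"|'.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}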
	I1211 00:11:34.456153   39129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:11:34.462793   39129 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1211 00:11:34.463922   39129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:11:34.471096   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:34.576052   39129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 00:11:34.729272   39129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:11:34.729346   39129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:11:34.732930   39129 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1211 00:11:34.732954   39129 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1211 00:11:34.732962   39129 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1211 00:11:34.732969   39129 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:34.732973   39129 command_runner.go:130] > Access: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732985   39129 command_runner.go:130] > Modify: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732992   39129 command_runner.go:130] > Change: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732995   39129 command_runner.go:130] >  Birth: -
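	After restarting CRI-O, the start logic refuses to proceed until the socket is back ("Will wait 60s for socket path ..."), and the stat output above confirms /var/run/crio/crio.sock exists as a socket. A minimal polling loop in the same spirit, as a sketch (only the 60s budget comes from the log; the 500ms interval is an assumption):

// sockwait.go: sketch of a bounded wait for a runtime socket to appear.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the runtime's socket is back
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}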
	I1211 00:11:34.733171   39129 start.go:564] Will wait 60s for crictl version
	I1211 00:11:34.733232   39129 ssh_runner.go:195] Run: which crictl
	I1211 00:11:34.736601   39129 command_runner.go:130] > /usr/local/bin/crictl
	I1211 00:11:34.736687   39129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:11:34.757793   39129 command_runner.go:130] > Version:  0.1.0
	I1211 00:11:34.757906   39129 command_runner.go:130] > RuntimeName:  cri-o
	I1211 00:11:34.757921   39129 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1211 00:11:34.757928   39129 command_runner.go:130] > RuntimeApiVersion:  v1
	I1211 00:11:34.760151   39129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:11:34.760230   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.787961   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.787986   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.787993   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.787998   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.788005   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.788009   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.788013   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.788019   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.788024   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.788028   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.788035   39129 command_runner.go:130] >      static
	I1211 00:11:34.788039   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.788043   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.788051   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.788055   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.788058   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.788069   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.788074   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.788080   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.788088   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.789644   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.815359   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.815385   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.815392   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.815397   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.815402   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.815425   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.815432   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.815439   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.815448   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.815452   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.815456   39129 command_runner.go:130] >      static
	I1211 00:11:34.815460   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.815468   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.815473   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.815480   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.815484   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.815491   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.815496   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.815505   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.815512   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.822208   39129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:11:34.825193   39129 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:11:34.839960   39129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:11:34.843868   39129 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1211 00:11:34.843970   39129 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:11:34.844072   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:34.844127   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.876890   39129 command_runner.go:130] > {
	I1211 00:11:34.876911   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.876915   39129 command_runner.go:130] >     {
	I1211 00:11:34.876923   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.876928   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.876934   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.876937   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876941   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.876951   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.876963   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.876967   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876971   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.876979   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.876984   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.876987   39129 command_runner.go:130] >     },
	I1211 00:11:34.876991   39129 command_runner.go:130] >     {
	I1211 00:11:34.876997   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.877005   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877011   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.877014   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877018   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877026   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.877038   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.877042   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877046   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.877053   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877060   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877067   39129 command_runner.go:130] >     },
	I1211 00:11:34.877070   39129 command_runner.go:130] >     {
	I1211 00:11:34.877077   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.877089   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877094   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.877098   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877113   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877124   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.877132   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.877139   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877143   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.877147   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.877151   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877154   39129 command_runner.go:130] >     },
	I1211 00:11:34.877158   39129 command_runner.go:130] >     {
	I1211 00:11:34.877165   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.877171   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877176   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.877180   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877186   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877194   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.877204   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.877211   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877216   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.877219   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877224   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877234   39129 command_runner.go:130] >       },
	I1211 00:11:34.877242   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877253   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877257   39129 command_runner.go:130] >     },
	I1211 00:11:34.877260   39129 command_runner.go:130] >     {
	I1211 00:11:34.877267   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.877271   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877280   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.877287   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877291   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877299   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.877309   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.877317   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877326   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.877334   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877343   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877347   39129 command_runner.go:130] >       },
	I1211 00:11:34.877351   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877359   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877363   39129 command_runner.go:130] >     },
	I1211 00:11:34.877367   39129 command_runner.go:130] >     {
	I1211 00:11:34.877374   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.877381   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877387   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.877390   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877394   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877411   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.877420   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.877426   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877430   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.877434   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877438   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877441   39129 command_runner.go:130] >       },
	I1211 00:11:34.877445   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877450   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877455   39129 command_runner.go:130] >     },
	I1211 00:11:34.877459   39129 command_runner.go:130] >     {
	I1211 00:11:34.877473   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.877476   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.877490   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877494   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877502   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.877512   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.877516   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877520   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.877527   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877534   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877538   39129 command_runner.go:130] >     },
	I1211 00:11:34.877550   39129 command_runner.go:130] >     {
	I1211 00:11:34.877556   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.877560   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877565   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.877571   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877575   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877582   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.877602   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.877606   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877614   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.877618   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877630   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877633   39129 command_runner.go:130] >       },
	I1211 00:11:34.877636   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877640   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877646   39129 command_runner.go:130] >     },
	I1211 00:11:34.877649   39129 command_runner.go:130] >     {
	I1211 00:11:34.877656   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.877662   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877667   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.877670   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877674   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877681   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.877695   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.877699   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877703   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.877707   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877714   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.877717   39129 command_runner.go:130] >       },
	I1211 00:11:34.877721   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877732   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.877738   39129 command_runner.go:130] >     }
	I1211 00:11:34.877741   39129 command_runner.go:130] >   ]
	I1211 00:11:34.877744   39129 command_runner.go:130] > }
	I1211 00:11:34.877906   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.877920   39129 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:11:34.877980   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.904837   39129 command_runner.go:130] > {
	I1211 00:11:34.904873   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.904879   39129 command_runner.go:130] >     {
	I1211 00:11:34.904887   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.904893   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904899   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.904903   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904925   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.904940   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.904949   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.904958   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904962   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.904966   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.904971   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.904975   39129 command_runner.go:130] >     },
	I1211 00:11:34.904978   39129 command_runner.go:130] >     {
	I1211 00:11:34.904985   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.904989   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904999   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.905010   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905015   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905023   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.905032   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.905038   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905042   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.905046   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905054   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905064   39129 command_runner.go:130] >     },
	I1211 00:11:34.905068   39129 command_runner.go:130] >     {
	I1211 00:11:34.905075   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.905079   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905084   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.905090   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905095   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905103   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.905113   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.905121   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905126   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.905130   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.905134   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905143   39129 command_runner.go:130] >     },
	I1211 00:11:34.905146   39129 command_runner.go:130] >     {
	I1211 00:11:34.905153   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.905162   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905167   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.905170   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905175   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905182   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.905192   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.905195   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905199   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.905209   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905217   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905228   39129 command_runner.go:130] >       },
	I1211 00:11:34.905237   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905244   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905248   39129 command_runner.go:130] >     },
	I1211 00:11:34.905251   39129 command_runner.go:130] >     {
	I1211 00:11:34.905258   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.905262   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905267   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.905272   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905276   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905284   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.905295   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.905302   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905306   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.905310   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905315   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905322   39129 command_runner.go:130] >       },
	I1211 00:11:34.905326   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905330   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905334   39129 command_runner.go:130] >     },
	I1211 00:11:34.905337   39129 command_runner.go:130] >     {
	I1211 00:11:34.905351   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.905355   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905361   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.905368   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905378   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905391   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.905400   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.905408   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905413   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.905417   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905424   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905431   39129 command_runner.go:130] >       },
	I1211 00:11:34.905435   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905439   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905441   39129 command_runner.go:130] >     },
	I1211 00:11:34.905444   39129 command_runner.go:130] >     {
	I1211 00:11:34.905451   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.905457   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905463   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.905466   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905470   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.905492   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.905496   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905500   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.905509   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905513   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905516   39129 command_runner.go:130] >     },
	I1211 00:11:34.905519   39129 command_runner.go:130] >     {
	I1211 00:11:34.905526   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.905535   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905541   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.905544   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905548   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905556   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.905573   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.905577   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905581   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.905585   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905589   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905592   39129 command_runner.go:130] >       },
	I1211 00:11:34.905596   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905604   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905612   39129 command_runner.go:130] >     },
	I1211 00:11:34.905619   39129 command_runner.go:130] >     {
	I1211 00:11:34.905625   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.905629   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905634   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.905637   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905641   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905657   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.905665   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.905671   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905675   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.905679   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905683   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.905686   39129 command_runner.go:130] >       },
	I1211 00:11:34.905690   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905697   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.905700   39129 command_runner.go:130] >     }
	I1211 00:11:34.905703   39129 command_runner.go:130] >   ]
	I1211 00:11:34.905705   39129 command_runner.go:130] > }
	I1211 00:11:34.908324   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.908347   39129 cache_images.go:86] Images are preloaded, skipping loading
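The two identical "crictl images --output json" dumps above come from two separate checks (preload verification, then cache loading), each listing the same nine preloaded images. To pull just the image tags out of that JSON by hand, something like the following works, assuming jq is installed on the node:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'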
	I1211 00:11:34.908354   39129 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:11:34.908461   39129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
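The generated unit text above relies on the standard systemd drop-in pattern: an empty ExecStart= first clears the packaged command, then the replacement ExecStart= takes effect. A hypothetical drop-in applying the same pattern (file path and flags illustrative only, not the exact file minikube writes):

    sudo tee /etc/systemd/system/kubelet.service.d/10-override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet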
	I1211 00:11:34.908543   39129 ssh_runner.go:195] Run: crio config
	I1211 00:11:34.971791   39129 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1211 00:11:34.971813   39129 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1211 00:11:34.971821   39129 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1211 00:11:34.971824   39129 command_runner.go:130] > #
	I1211 00:11:34.971832   39129 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1211 00:11:34.971839   39129 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1211 00:11:34.971846   39129 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1211 00:11:34.971853   39129 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1211 00:11:34.971857   39129 command_runner.go:130] > # reload'.
	I1211 00:11:34.971875   39129 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1211 00:11:34.971882   39129 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1211 00:11:34.971888   39129 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1211 00:11:34.971894   39129 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
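As the header comments note, CRI-O re-reads the options marked 'This option supports live configuration reload' when it receives SIGHUP; everything else requires a restart. A one-line sketch for triggering that reload on a running node (assumes pidof is available):

    sudo kill -SIGHUP "$(pidof crio)"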
	I1211 00:11:34.971898   39129 command_runner.go:130] > [crio]
	I1211 00:11:34.971903   39129 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1211 00:11:34.971908   39129 command_runner.go:130] > # containers images, in this directory.
	I1211 00:11:34.972453   39129 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1211 00:11:34.972468   39129 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1211 00:11:34.973023   39129 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1211 00:11:34.973035   39129 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1211 00:11:34.973741   39129 command_runner.go:130] > # imagestore = ""
	I1211 00:11:34.973760   39129 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1211 00:11:34.973768   39129 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1211 00:11:34.973950   39129 command_runner.go:130] > # storage_driver = "overlay"
	I1211 00:11:34.973965   39129 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1211 00:11:34.973972   39129 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1211 00:11:34.974083   39129 command_runner.go:130] > # storage_option = [
	I1211 00:11:34.974240   39129 command_runner.go:130] > # ]
	I1211 00:11:34.974255   39129 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1211 00:11:34.974262   39129 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1211 00:11:34.974433   39129 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1211 00:11:34.974477   39129 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1211 00:11:34.974487   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1211 00:11:34.974492   39129 command_runner.go:130] > # always happen on a node reboot
	I1211 00:11:34.974707   39129 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1211 00:11:34.974755   39129 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1211 00:11:34.974769   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1211 00:11:34.974774   39129 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1211 00:11:34.974951   39129 command_runner.go:130] > # version_file_persist = ""
	I1211 00:11:34.974999   39129 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1211 00:11:34.975014   39129 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1211 00:11:34.975286   39129 command_runner.go:130] > # internal_wipe = true
	I1211 00:11:34.975303   39129 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1211 00:11:34.975309   39129 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1211 00:11:34.975533   39129 command_runner.go:130] > # internal_repair = true
	I1211 00:11:34.975547   39129 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1211 00:11:34.975554   39129 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1211 00:11:34.975560   39129 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1211 00:11:34.975800   39129 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
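Every commented-out key in this dump is a compiled-in default; only the uncommented keys (conmon_cgroup, cgroup_manager, default_sysctls below) are overridden for this cluster. CRI-O also reads drop-in files from /etc/crio/crio.conf.d, so a [crio] default can be overridden without editing the main config; a hypothetical example (file name and value illustrative only):

    sudo tee /etc/crio/crio.conf.d/10-logdir.conf >/dev/null <<'EOF'
    [crio]
    log_dir = "/data/crio/pods"
    EOF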
	I1211 00:11:34.975813   39129 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1211 00:11:34.975817   39129 command_runner.go:130] > [crio.api]
	I1211 00:11:34.975838   39129 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1211 00:11:34.976047   39129 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1211 00:11:34.976068   39129 command_runner.go:130] > # IP address on which the stream server will listen.
	I1211 00:11:34.976289   39129 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1211 00:11:34.976305   39129 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1211 00:11:34.976322   39129 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1211 00:11:34.976522   39129 command_runner.go:130] > # stream_port = "0"
	I1211 00:11:34.976537   39129 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1211 00:11:34.976743   39129 command_runner.go:130] > # stream_enable_tls = false
	I1211 00:11:34.976759   39129 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1211 00:11:34.976966   39129 command_runner.go:130] > # stream_idle_timeout = ""
	I1211 00:11:34.976981   39129 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1211 00:11:34.976987   39129 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977102   39129 command_runner.go:130] > # stream_tls_cert = ""
	I1211 00:11:34.977116   39129 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1211 00:11:34.977122   39129 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977375   39129 command_runner.go:130] > # stream_tls_key = ""
	I1211 00:11:34.977408   39129 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1211 00:11:34.977433   39129 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1211 00:11:34.977440   39129 command_runner.go:130] > # automatically pick up the changes.
	I1211 00:11:34.977571   39129 command_runner.go:130] > # stream_tls_ca = ""
	I1211 00:11:34.977641   39129 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977779   39129 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1211 00:11:34.977797   39129 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977991   39129 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1211 00:11:34.978007   39129 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1211 00:11:34.978040   39129 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1211 00:11:34.978056   39129 command_runner.go:130] > [crio.runtime]
	I1211 00:11:34.978069   39129 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1211 00:11:34.978076   39129 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1211 00:11:34.978080   39129 command_runner.go:130] > # "nofile=1024:2048"
	I1211 00:11:34.978086   39129 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1211 00:11:34.978208   39129 command_runner.go:130] > # default_ulimits = [
	I1211 00:11:34.978352   39129 command_runner.go:130] > # ]
	I1211 00:11:34.978369   39129 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1211 00:11:34.978551   39129 command_runner.go:130] > # no_pivot = false
	I1211 00:11:34.978566   39129 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1211 00:11:34.978572   39129 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1211 00:11:34.978723   39129 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1211 00:11:34.978739   39129 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1211 00:11:34.978744   39129 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1211 00:11:34.978775   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.978921   39129 command_runner.go:130] > # conmon = ""
	I1211 00:11:34.978933   39129 command_runner.go:130] > # Cgroup setting for conmon
	I1211 00:11:34.978941   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1211 00:11:34.979286   39129 command_runner.go:130] > conmon_cgroup = "pod"
	I1211 00:11:34.979301   39129 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1211 00:11:34.979307   39129 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1211 00:11:34.979343   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.979348   39129 command_runner.go:130] > # conmon_env = [
	I1211 00:11:34.979496   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979512   39129 command_runner.go:130] > # Additional environment variables to set for all the
	I1211 00:11:34.979518   39129 command_runner.go:130] > # containers. These are overridden if set in the
	I1211 00:11:34.979524   39129 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1211 00:11:34.979552   39129 command_runner.go:130] > # default_env = [
	I1211 00:11:34.979707   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979725   39129 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1211 00:11:34.979734   39129 command_runner.go:130] > # This option is deprecated, and will be inferred from whether SELinux is enabled on the host in the future.
	I1211 00:11:34.979983   39129 command_runner.go:130] > # selinux = false
	I1211 00:11:34.980000   39129 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1211 00:11:34.980009   39129 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1211 00:11:34.980015   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980366   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.980414   39129 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1211 00:11:34.980429   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980434   39129 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1211 00:11:34.980447   39129 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1211 00:11:34.980453   39129 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1211 00:11:34.980464   39129 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1211 00:11:34.980471   39129 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1211 00:11:34.980493   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980499   39129 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1211 00:11:34.980514   39129 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1211 00:11:34.980524   39129 command_runner.go:130] > # the cgroup blockio controller.
	I1211 00:11:34.980678   39129 command_runner.go:130] > # blockio_config_file = ""
	I1211 00:11:34.980713   39129 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1211 00:11:34.980723   39129 command_runner.go:130] > # blockio parameters.
	I1211 00:11:34.980981   39129 command_runner.go:130] > # blockio_reload = false
	I1211 00:11:34.980995   39129 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1211 00:11:34.980999   39129 command_runner.go:130] > # irqbalance daemon.
	I1211 00:11:34.981198   39129 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1211 00:11:34.981209   39129 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1211 00:11:34.981217   39129 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1211 00:11:34.981265   39129 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1211 00:11:34.981385   39129 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1211 00:11:34.981396   39129 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1211 00:11:34.981402   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.981515   39129 command_runner.go:130] > # rdt_config_file = ""
	I1211 00:11:34.981525   39129 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1211 00:11:34.981657   39129 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1211 00:11:34.981668   39129 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1211 00:11:34.981795   39129 command_runner.go:130] > # separate_pull_cgroup = ""
	I1211 00:11:34.981809   39129 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1211 00:11:34.981816   39129 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1211 00:11:34.981820   39129 command_runner.go:130] > # will be added.
	I1211 00:11:34.981926   39129 command_runner.go:130] > # default_capabilities = [
	I1211 00:11:34.982055   39129 command_runner.go:130] > # 	"CHOWN",
	I1211 00:11:34.982151   39129 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1211 00:11:34.982256   39129 command_runner.go:130] > # 	"FSETID",
	I1211 00:11:34.982350   39129 command_runner.go:130] > # 	"FOWNER",
	I1211 00:11:34.982451   39129 command_runner.go:130] > # 	"SETGID",
	I1211 00:11:34.982543   39129 command_runner.go:130] > # 	"SETUID",
	I1211 00:11:34.982687   39129 command_runner.go:130] > # 	"SETPCAP",
	I1211 00:11:34.982695   39129 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1211 00:11:34.982819   39129 command_runner.go:130] > # 	"KILL",
	I1211 00:11:34.982949   39129 command_runner.go:130] > # ]
	I1211 00:11:34.982960   39129 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1211 00:11:34.982993   39129 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1211 00:11:34.983107   39129 command_runner.go:130] > # add_inheritable_capabilities = false
	I1211 00:11:34.983118   39129 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1211 00:11:34.983132   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983136   39129 command_runner.go:130] > default_sysctls = [
	I1211 00:11:34.983272   39129 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1211 00:11:34.983279   39129 command_runner.go:130] > ]
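The one sysctl set here, net.ipv4.ip_unprivileged_port_start=0, lets unprivileged container processes bind ports below 1024 without CAP_NET_BIND_SERVICE (which ingress-style pods rely on). It can be checked from inside any pod on the node:

    sysctl net.ipv4.ip_unprivileged_port_start   # expected: net.ipv4.ip_unprivileged_port_start = 0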
	I1211 00:11:34.983285   39129 command_runner.go:130] > # List of devices on the host that a
	I1211 00:11:34.983300   39129 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1211 00:11:34.983304   39129 command_runner.go:130] > # allowed_devices = [
	I1211 00:11:34.983428   39129 command_runner.go:130] > # 	"/dev/fuse",
	I1211 00:11:34.983527   39129 command_runner.go:130] > # 	"/dev/net/tun",
	I1211 00:11:34.983650   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983660   39129 command_runner.go:130] > # List of additional devices, specified as
	I1211 00:11:34.983668   39129 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1211 00:11:34.983680   39129 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1211 00:11:34.983687   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983813   39129 command_runner.go:130] > # additional_devices = [
	I1211 00:11:34.983820   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983826   39129 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1211 00:11:34.983923   39129 command_runner.go:130] > # cdi_spec_dirs = [
	I1211 00:11:34.984053   39129 command_runner.go:130] > # 	"/etc/cdi",
	I1211 00:11:34.984060   39129 command_runner.go:130] > # 	"/var/run/cdi",
	I1211 00:11:34.984160   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984177   39129 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1211 00:11:34.984184   39129 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1211 00:11:34.984195   39129 command_runner.go:130] > # Defaults to false.
	I1211 00:11:34.984334   39129 command_runner.go:130] > # device_ownership_from_security_context = false
	I1211 00:11:34.984345   39129 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1211 00:11:34.984355   39129 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1211 00:11:34.984488   39129 command_runner.go:130] > # hooks_dir = [
	I1211 00:11:34.984640   39129 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1211 00:11:34.984647   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984653   39129 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1211 00:11:34.984667   39129 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1211 00:11:34.984672   39129 command_runner.go:130] > # its default mounts from the following two files:
	I1211 00:11:34.984675   39129 command_runner.go:130] > #
	I1211 00:11:34.984681   39129 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1211 00:11:34.984694   39129 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1211 00:11:34.984700   39129 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1211 00:11:34.984703   39129 command_runner.go:130] > #
	I1211 00:11:34.984710   39129 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1211 00:11:34.984716   39129 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1211 00:11:34.984722   39129 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1211 00:11:34.984727   39129 command_runner.go:130] > #      only add mounts it finds in this file.
	I1211 00:11:34.984729   39129 command_runner.go:130] > #
	I1211 00:11:34.984883   39129 command_runner.go:130] > # default_mounts_file = ""
	I1211 00:11:34.984900   39129 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1211 00:11:34.984908   39129 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1211 00:11:34.985051   39129 command_runner.go:130] > # pids_limit = -1
	I1211 00:11:34.985062   39129 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1211 00:11:34.985075   39129 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1211 00:11:34.985083   39129 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1211 00:11:34.985091   39129 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1211 00:11:34.985222   39129 command_runner.go:130] > # log_size_max = -1
	I1211 00:11:34.985233   39129 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1211 00:11:34.985372   39129 command_runner.go:130] > # log_to_journald = false
	I1211 00:11:34.985382   39129 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1211 00:11:34.985404   39129 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1211 00:11:34.985411   39129 command_runner.go:130] > # Path to directory for container attach sockets.
	I1211 00:11:34.985416   39129 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1211 00:11:34.985422   39129 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1211 00:11:34.985425   39129 command_runner.go:130] > # bind_mount_prefix = ""
	I1211 00:11:34.985434   39129 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1211 00:11:34.985569   39129 command_runner.go:130] > # read_only = false
	I1211 00:11:34.985580   39129 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1211 00:11:34.985587   39129 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1211 00:11:34.985601   39129 command_runner.go:130] > # live configuration reload.
	I1211 00:11:34.985605   39129 command_runner.go:130] > # log_level = "info"
	I1211 00:11:34.985611   39129 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1211 00:11:34.985616   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.985619   39129 command_runner.go:130] > # log_filter = ""
	I1211 00:11:34.985626   39129 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985632   39129 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1211 00:11:34.985635   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985643   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985647   39129 command_runner.go:130] > # uid_mappings = ""
	I1211 00:11:34.985654   39129 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985660   39129 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1211 00:11:34.985664   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985672   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985681   39129 command_runner.go:130] > # gid_mappings = ""
	I1211 00:11:34.985688   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1211 00:11:34.985694   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985700   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985708   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985712   39129 command_runner.go:130] > # minimum_mappable_uid = -1
	I1211 00:11:34.985718   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1211 00:11:34.985723   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985729   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985737   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985741   39129 command_runner.go:130] > # minimum_mappable_gid = -1
	I1211 00:11:34.985747   39129 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1211 00:11:34.985753   39129 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1211 00:11:34.985759   39129 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1211 00:11:34.985975   39129 command_runner.go:130] > # ctr_stop_timeout = 30
	I1211 00:11:34.985988   39129 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1211 00:11:34.985994   39129 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1211 00:11:34.985999   39129 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1211 00:11:34.986004   39129 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1211 00:11:34.986008   39129 command_runner.go:130] > # drop_infra_ctr = true
	I1211 00:11:34.986014   39129 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1211 00:11:34.986019   39129 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1211 00:11:34.986029   39129 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1211 00:11:34.986033   39129 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1211 00:11:34.986040   39129 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1211 00:11:34.986046   39129 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1211 00:11:34.986051   39129 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1211 00:11:34.986057   39129 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1211 00:11:34.986060   39129 command_runner.go:130] > # shared_cpuset = ""
	I1211 00:11:34.986066   39129 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1211 00:11:34.986071   39129 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1211 00:11:34.986075   39129 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1211 00:11:34.986082   39129 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1211 00:11:34.986085   39129 command_runner.go:130] > # pinns_path = ""
	I1211 00:11:34.986091   39129 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1211 00:11:34.986098   39129 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1211 00:11:34.986101   39129 command_runner.go:130] > # enable_criu_support = true
	I1211 00:11:34.986107   39129 command_runner.go:130] > # Enable/disable the generation of the container and
	I1211 00:11:34.986112   39129 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1211 00:11:34.986116   39129 command_runner.go:130] > # enable_pod_events = false
	I1211 00:11:34.986122   39129 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1211 00:11:34.986131   39129 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1211 00:11:34.986135   39129 command_runner.go:130] > # default_runtime = "crun"
	I1211 00:11:34.986140   39129 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1211 00:11:34.986148   39129 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I1211 00:11:34.986159   39129 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1211 00:11:34.986164   39129 command_runner.go:130] > # creation as a file is not desired either.
	I1211 00:11:34.986172   39129 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1211 00:11:34.986177   39129 command_runner.go:130] > # the hostname is being managed dynamically.
	I1211 00:11:34.986181   39129 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1211 00:11:34.986185   39129 command_runner.go:130] > # ]
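	As a concrete sketch of the option above, a drop-in that rejects the /etc/hostname case from the example could look like this (the path list is illustrative; it should contain whatever sources the node needs protected):

	  [crio.runtime]
	  # Fail container creation if /etc/hostname is absent on the host,
	  # rather than letting it be created as a directory.
	  absent_mount_sources_to_reject = [
	  	"/etc/hostname",
	  ]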
	I1211 00:11:34.986192   39129 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1211 00:11:34.986198   39129 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1211 00:11:34.986205   39129 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1211 00:11:34.986210   39129 command_runner.go:130] > # Each entry in the table should follow the format:
	I1211 00:11:34.986212   39129 command_runner.go:130] > #
	I1211 00:11:34.986217   39129 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1211 00:11:34.986221   39129 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1211 00:11:34.986226   39129 command_runner.go:130] > # runtime_type = "oci"
	I1211 00:11:34.986231   39129 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1211 00:11:34.986235   39129 command_runner.go:130] > # inherit_default_runtime = false
	I1211 00:11:34.986240   39129 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1211 00:11:34.986244   39129 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1211 00:11:34.986248   39129 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1211 00:11:34.986251   39129 command_runner.go:130] > # monitor_env = []
	I1211 00:11:34.986256   39129 command_runner.go:130] > # privileged_without_host_devices = false
	I1211 00:11:34.986259   39129 command_runner.go:130] > # allowed_annotations = []
	I1211 00:11:34.986265   39129 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1211 00:11:34.986268   39129 command_runner.go:130] > # no_sync_log = false
	I1211 00:11:34.986272   39129 command_runner.go:130] > # default_annotations = {}
	I1211 00:11:34.986276   39129 command_runner.go:130] > # stream_websockets = false
	I1211 00:11:34.986279   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.986309   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.986315   39129 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1211 00:11:34.986324   39129 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1211 00:11:34.986330   39129 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1211 00:11:34.986337   39129 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1211 00:11:34.986340   39129 command_runner.go:130] > #   in $PATH.
	I1211 00:11:34.986346   39129 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1211 00:11:34.986350   39129 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1211 00:11:34.986356   39129 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1211 00:11:34.986359   39129 command_runner.go:130] > #   state.
	I1211 00:11:34.986366   39129 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1211 00:11:34.986375   39129 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1211 00:11:34.986381   39129 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1211 00:11:34.986387   39129 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1211 00:11:34.986392   39129 command_runner.go:130] > #   the values from the default runtime on load time.
	I1211 00:11:34.986398   39129 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1211 00:11:34.986404   39129 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1211 00:11:34.986410   39129 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1211 00:11:34.986417   39129 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1211 00:11:34.986421   39129 command_runner.go:130] > #   The currently recognized values are:
	I1211 00:11:34.986428   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1211 00:11:34.986435   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1211 00:11:34.986440   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1211 00:11:34.986446   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1211 00:11:34.986455   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1211 00:11:34.986462   39129 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1211 00:11:34.986469   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1211 00:11:34.986475   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1211 00:11:34.986481   39129 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1211 00:11:34.986487   39129 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1211 00:11:34.986494   39129 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1211 00:11:34.986500   39129 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1211 00:11:34.986505   39129 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1211 00:11:34.986511   39129 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1211 00:11:34.986517   39129 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1211 00:11:34.986528   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1211 00:11:34.986534   39129 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1211 00:11:34.986538   39129 command_runner.go:130] > #   deprecated option "conmon".
	I1211 00:11:34.986545   39129 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1211 00:11:34.986550   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1211 00:11:34.986556   39129 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1211 00:11:34.986561   39129 command_runner.go:130] > #   should be moved to the container's cgroup
	I1211 00:11:34.986567   39129 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1211 00:11:34.986572   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1211 00:11:34.986579   39129 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1211 00:11:34.986583   39129 command_runner.go:130] > #   conmon-rs by using:
	I1211 00:11:34.986591   39129 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1211 00:11:34.986598   39129 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1211 00:11:34.986606   39129 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1211 00:11:34.986613   39129 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1211 00:11:34.986618   39129 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1211 00:11:34.986625   39129 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1211 00:11:34.986633   39129 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1211 00:11:34.986641   39129 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1211 00:11:34.986651   39129 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1211 00:11:34.986658   39129 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1211 00:11:34.986662   39129 command_runner.go:130] > #   when a machine crash happens.
	I1211 00:11:34.986669   39129 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1211 00:11:34.986677   39129 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1211 00:11:34.986685   39129 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1211 00:11:34.986689   39129 command_runner.go:130] > #   seccomp profile for the runtime.
	I1211 00:11:34.986695   39129 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1211 00:11:34.986702   39129 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1211 00:11:34.986704   39129 command_runner.go:130] > #
	I1211 00:11:34.986708   39129 command_runner.go:130] > # Using the seccomp notifier feature:
	I1211 00:11:34.986711   39129 command_runner.go:130] > #
	I1211 00:11:34.986717   39129 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1211 00:11:34.986724   39129 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1211 00:11:34.986729   39129 command_runner.go:130] > #
	I1211 00:11:34.986739   39129 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1211 00:11:34.986745   39129 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1211 00:11:34.986748   39129 command_runner.go:130] > #
	I1211 00:11:34.986754   39129 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1211 00:11:34.986757   39129 command_runner.go:130] > # feature.
	I1211 00:11:34.986760   39129 command_runner.go:130] > #
	I1211 00:11:34.986766   39129 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1211 00:11:34.986772   39129 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1211 00:11:34.986778   39129 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1211 00:11:34.986784   39129 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1211 00:11:34.986790   39129 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1211 00:11:34.986792   39129 command_runner.go:130] > #
	I1211 00:11:34.986799   39129 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1211 00:11:34.986805   39129 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1211 00:11:34.986808   39129 command_runner.go:130] > #
	I1211 00:11:34.986814   39129 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1211 00:11:34.986820   39129 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1211 00:11:34.986822   39129 command_runner.go:130] > #
	I1211 00:11:34.986828   39129 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1211 00:11:34.986833   39129 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1211 00:11:34.986837   39129 command_runner.go:130] > # limitation.
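	Tying the notifier requirements above together, a runtime handler permitted to process the annotation might be configured roughly like this (a minimal sketch; the handler name and path mirror the runc entry below):

	  [crio.runtime.runtimes.runc]
	  runtime_path = "/usr/libexec/crio/runc"
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]

	A pod would then opt in by carrying the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and setting restartPolicy to Never, as described above.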
	I1211 00:11:34.986842   39129 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1211 00:11:34.986846   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1211 00:11:34.986850   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986853   39129 command_runner.go:130] > runtime_root = "/run/crun"
	I1211 00:11:34.986857   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986860   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986864   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.986868   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.986872   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.986876   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.986880   39129 command_runner.go:130] > allowed_annotations = [
	I1211 00:11:34.986887   39129 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1211 00:11:34.986889   39129 command_runner.go:130] > ]
	I1211 00:11:34.986894   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.986898   39129 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1211 00:11:34.986902   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1211 00:11:34.986906   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986909   39129 command_runner.go:130] > runtime_root = "/run/runc"
	I1211 00:11:34.986913   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986917   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986921   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.987106   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.987121   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.987127   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.987132   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.987139   39129 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1211 00:11:34.987147   39129 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1211 00:11:34.987154   39129 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1211 00:11:34.987166   39129 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1211 00:11:34.987177   39129 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1211 00:11:34.987187   39129 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1211 00:11:34.987194   39129 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1211 00:11:34.987200   39129 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1211 00:11:34.987209   39129 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1211 00:11:34.987218   39129 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1211 00:11:34.987224   39129 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1211 00:11:34.987231   39129 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1211 00:11:34.987235   39129 command_runner.go:130] > # Example:
	I1211 00:11:34.987241   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1211 00:11:34.987246   39129 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1211 00:11:34.987251   39129 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1211 00:11:34.987255   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1211 00:11:34.987258   39129 command_runner.go:130] > # cpuset = "0-1"
	I1211 00:11:34.987262   39129 command_runner.go:130] > # cpushares = "5"
	I1211 00:11:34.987269   39129 command_runner.go:130] > # cpuquota = "1000"
	I1211 00:11:34.987273   39129 command_runner.go:130] > # cpuperiod = "100000"
	I1211 00:11:34.987277   39129 command_runner.go:130] > # cpulimit = "35"
	I1211 00:11:34.987280   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.987284   39129 command_runner.go:130] > # The workload name is workload-type.
	I1211 00:11:34.987292   39129 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1211 00:11:34.987298   39129 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1211 00:11:34.987303   39129 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1211 00:11:34.987311   39129 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1211 00:11:34.987317   39129 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1211 00:11:34.987322   39129 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1211 00:11:34.987328   39129 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1211 00:11:34.987332   39129 command_runner.go:130] > # Default value is set to true
	I1211 00:11:34.987336   39129 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1211 00:11:34.987342   39129 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1211 00:11:34.987346   39129 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1211 00:11:34.987350   39129 command_runner.go:130] > # Default value is set to 'false'
	I1211 00:11:34.987355   39129 command_runner.go:130] > # disable_hostport_mapping = false
	I1211 00:11:34.987361   39129 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1211 00:11:34.987369   39129 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1211 00:11:34.987372   39129 command_runner.go:130] > # timezone = ""
	I1211 00:11:34.987379   39129 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1211 00:11:34.987382   39129 command_runner.go:130] > #
	I1211 00:11:34.987387   39129 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1211 00:11:34.987393   39129 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1211 00:11:34.987396   39129 command_runner.go:130] > [crio.image]
	I1211 00:11:34.987402   39129 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1211 00:11:34.987407   39129 command_runner.go:130] > # default_transport = "docker://"
	I1211 00:11:34.987413   39129 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1211 00:11:34.987419   39129 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987423   39129 command_runner.go:130] > # global_auth_file = ""
	I1211 00:11:34.987428   39129 command_runner.go:130] > # The image used to instantiate infra containers.
	I1211 00:11:34.987432   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987442   39129 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.987448   39129 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1211 00:11:34.987454   39129 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987458   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987463   39129 command_runner.go:130] > # pause_image_auth_file = ""
	I1211 00:11:34.987468   39129 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1211 00:11:34.987478   39129 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1211 00:11:34.987484   39129 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1211 00:11:34.987489   39129 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1211 00:11:34.987505   39129 command_runner.go:130] > # pause_command = "/pause"
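	Restated as an explicit drop-in, the pause settings above would look like this (values mirror the commented defaults):

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  # "" would fall back to the pause image's entrypoint/command;
	  # leaving the key commented out falls back to the default "/pause".
	  pause_command = "/pause"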
	I1211 00:11:34.987511   39129 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1211 00:11:34.987518   39129 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1211 00:11:34.987524   39129 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1211 00:11:34.987530   39129 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1211 00:11:34.987536   39129 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1211 00:11:34.987542   39129 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1211 00:11:34.987545   39129 command_runner.go:130] > # pinned_images = [
	I1211 00:11:34.987549   39129 command_runner.go:130] > # ]
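	A sketch of the three pattern styles described above (the kube-* and *coredns* names are purely illustrative; only the pause image matches the default mentioned earlier):

	  [crio.image]
	  pinned_images = [
	  	"registry.k8s.io/pause:3.10.1", # exact: must match the entire name
	  	"registry.k8s.io/kube-*",       # glob: wildcard only at the end
	  	"*coredns*",                    # keyword: wildcards on both ends
	  ]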
	I1211 00:11:34.987555   39129 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1211 00:11:34.987561   39129 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1211 00:11:34.987567   39129 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1211 00:11:34.987574   39129 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1211 00:11:34.987579   39129 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1211 00:11:34.987584   39129 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1211 00:11:34.987589   39129 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1211 00:11:34.987596   39129 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1211 00:11:34.987602   39129 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1211 00:11:34.987608   39129 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1211 00:11:34.987614   39129 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1211 00:11:34.987618   39129 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1211 00:11:34.987624   39129 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1211 00:11:34.987631   39129 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1211 00:11:34.987634   39129 command_runner.go:130] > # changing them here.
	I1211 00:11:34.987643   39129 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1211 00:11:34.987646   39129 command_runner.go:130] > # insecure_registries = [
	I1211 00:11:34.987651   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987657   39129 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1211 00:11:34.987662   39129 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1211 00:11:34.987666   39129 command_runner.go:130] > # image_volumes = "mkdir"
	I1211 00:11:34.987671   39129 command_runner.go:130] > # Temporary directory to use for storing big files
	I1211 00:11:34.987675   39129 command_runner.go:130] > # big_files_temporary_dir = ""
	I1211 00:11:34.987681   39129 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1211 00:11:34.987688   39129 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1211 00:11:34.987692   39129 command_runner.go:130] > # auto_reload_registries = false
	I1211 00:11:34.987698   39129 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1211 00:11:34.987706   39129 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1211 00:11:34.987711   39129 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1211 00:11:34.987715   39129 command_runner.go:130] > # pull_progress_timeout = "0s"
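	Following the rule above, a non-zero timeout also fixes the progress interval; for example (the duration is illustrative):

	  [crio.image]
	  # A 10m pull timeout yields a 1m progress interval (timeout / 10);
	  # "0s" disables both the timeout and the progress output.
	  pull_progress_timeout = "10m"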
	I1211 00:11:34.987719   39129 command_runner.go:130] > # The mode of short name resolution.
	I1211 00:11:34.987726   39129 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1211 00:11:34.987734   39129 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1211 00:11:34.987739   39129 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1211 00:11:34.987743   39129 command_runner.go:130] > # short_name_mode = "enforcing"
	I1211 00:11:34.987749   39129 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1211 00:11:34.987754   39129 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1211 00:11:34.987763   39129 command_runner.go:130] > # oci_artifact_mount_support = true
	I1211 00:11:34.987770   39129 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1211 00:11:34.987773   39129 command_runner.go:130] > # CNI plugins.
	I1211 00:11:34.987776   39129 command_runner.go:130] > [crio.network]
	I1211 00:11:34.987782   39129 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1211 00:11:34.987787   39129 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1211 00:11:34.987791   39129 command_runner.go:130] > # cni_default_network = ""
	I1211 00:11:34.987797   39129 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1211 00:11:34.987801   39129 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1211 00:11:34.987806   39129 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1211 00:11:34.987809   39129 command_runner.go:130] > # plugin_dirs = [
	I1211 00:11:34.987816   39129 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1211 00:11:34.987819   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987823   39129 command_runner.go:130] > # List of included pod metrics.
	I1211 00:11:34.987827   39129 command_runner.go:130] > # included_pod_metrics = [
	I1211 00:11:34.987830   39129 command_runner.go:130] > # ]
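	Restating the commented CNI defaults above as an explicit drop-in:

	  [crio.network]
	  network_dir = "/etc/cni/net.d/"
	  plugin_dirs = [
	  	"/opt/cni/bin/",
	  ]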
	I1211 00:11:34.987837   39129 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1211 00:11:34.987840   39129 command_runner.go:130] > [crio.metrics]
	I1211 00:11:34.987845   39129 command_runner.go:130] > # Globally enable or disable metrics support.
	I1211 00:11:34.987849   39129 command_runner.go:130] > # enable_metrics = false
	I1211 00:11:34.987853   39129 command_runner.go:130] > # Specify enabled metrics collectors.
	I1211 00:11:34.987859   39129 command_runner.go:130] > # Per default all metrics are enabled.
	I1211 00:11:34.987865   39129 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1211 00:11:34.987871   39129 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1211 00:11:34.987877   39129 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1211 00:11:34.987880   39129 command_runner.go:130] > # metrics_collectors = [
	I1211 00:11:34.987884   39129 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1211 00:11:34.987888   39129 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1211 00:11:34.987892   39129 command_runner.go:130] > # 	"containers_oom_total",
	I1211 00:11:34.987895   39129 command_runner.go:130] > # 	"processes_defunct",
	I1211 00:11:34.987900   39129 command_runner.go:130] > # 	"operations_total",
	I1211 00:11:34.987904   39129 command_runner.go:130] > # 	"operations_latency_seconds",
	I1211 00:11:34.987908   39129 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1211 00:11:34.987912   39129 command_runner.go:130] > # 	"operations_errors_total",
	I1211 00:11:34.987916   39129 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1211 00:11:34.987920   39129 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1211 00:11:34.987924   39129 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1211 00:11:34.987928   39129 command_runner.go:130] > # 	"image_pulls_success_total",
	I1211 00:11:34.987932   39129 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1211 00:11:34.987936   39129 command_runner.go:130] > # 	"containers_oom_count_total",
	I1211 00:11:34.987942   39129 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1211 00:11:34.987946   39129 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1211 00:11:34.987950   39129 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1211 00:11:34.987953   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987962   39129 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1211 00:11:34.987967   39129 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1211 00:11:34.987972   39129 command_runner.go:130] > # The port on which the metrics server will listen.
	I1211 00:11:34.987975   39129 command_runner.go:130] > # metrics_port = 9090
	I1211 00:11:34.987980   39129 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1211 00:11:34.987984   39129 command_runner.go:130] > # metrics_socket = ""
	I1211 00:11:34.987989   39129 command_runner.go:130] > # The certificate for the secure metrics server.
	I1211 00:11:34.987994   39129 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1211 00:11:34.988001   39129 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1211 00:11:34.988005   39129 command_runner.go:130] > # certificate on any modification event.
	I1211 00:11:34.988008   39129 command_runner.go:130] > # metrics_cert = ""
	I1211 00:11:34.988013   39129 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1211 00:11:34.988018   39129 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1211 00:11:34.988021   39129 command_runner.go:130] > # metrics_key = ""
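	A sketch that enables the metrics server with an explicit collector list (collector names come from the default list above; host and port mirror the commented defaults):

	  [crio.metrics]
	  enable_metrics = true
	  metrics_host = "127.0.0.1"
	  metrics_port = 9090
	  metrics_collectors = [
	  	# Per the prefix rule above, "operations_total" is treated the same as
	  	# "crio_operations_total" and "container_runtime_crio_operations_total".
	  	"operations_total",
	  	"image_pulls_failure_total",
	  ]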
	I1211 00:11:34.988026   39129 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1211 00:11:34.988030   39129 command_runner.go:130] > [crio.tracing]
	I1211 00:11:34.988035   39129 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1211 00:11:34.988038   39129 command_runner.go:130] > # enable_tracing = false
	I1211 00:11:34.988044   39129 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1211 00:11:34.988050   39129 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1211 00:11:34.988056   39129 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1211 00:11:34.988061   39129 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
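	A minimal sketch that turns tracing on and samples every span, reusing the default endpoint shown above:

	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "127.0.0.1:4317"
	  # Per the comment above, 1000000 samples every span.
	  tracing_sampling_rate_per_million = 1000000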
	I1211 00:11:34.988064   39129 command_runner.go:130] > # CRI-O NRI configuration.
	I1211 00:11:34.988067   39129 command_runner.go:130] > [crio.nri]
	I1211 00:11:34.988071   39129 command_runner.go:130] > # Globally enable or disable NRI.
	I1211 00:11:34.988075   39129 command_runner.go:130] > # enable_nri = true
	I1211 00:11:34.988079   39129 command_runner.go:130] > # NRI socket to listen on.
	I1211 00:11:34.988083   39129 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1211 00:11:34.988087   39129 command_runner.go:130] > # NRI plugin directory to use.
	I1211 00:11:34.988091   39129 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1211 00:11:34.988095   39129 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1211 00:11:34.988100   39129 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1211 00:11:34.988108   39129 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1211 00:11:34.988171   39129 command_runner.go:130] > # nri_disable_connections = false
	I1211 00:11:34.988177   39129 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1211 00:11:34.988182   39129 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1211 00:11:34.988186   39129 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1211 00:11:34.988190   39129 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1211 00:11:34.988194   39129 command_runner.go:130] > # NRI default validator configuration.
	I1211 00:11:34.988201   39129 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1211 00:11:34.988207   39129 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1211 00:11:34.988211   39129 command_runner.go:130] > # can be restricted/rejected:
	I1211 00:11:34.988215   39129 command_runner.go:130] > # - OCI hook injection
	I1211 00:11:34.988220   39129 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1211 00:11:34.988225   39129 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1211 00:11:34.988229   39129 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1211 00:11:34.988233   39129 command_runner.go:130] > # - adjustment of linux namespaces
	I1211 00:11:34.988240   39129 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1211 00:11:34.988246   39129 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1211 00:11:34.988251   39129 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1211 00:11:34.988254   39129 command_runner.go:130] > #
	I1211 00:11:34.988258   39129 command_runner.go:130] > # [crio.nri.default_validator]
	I1211 00:11:34.988262   39129 command_runner.go:130] > # nri_enable_default_validator = false
	I1211 00:11:34.988267   39129 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1211 00:11:34.988272   39129 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1211 00:11:34.988277   39129 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1211 00:11:34.988282   39129 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1211 00:11:34.988287   39129 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1211 00:11:34.988291   39129 command_runner.go:130] > # nri_validator_required_plugins = [
	I1211 00:11:34.988294   39129 command_runner.go:130] > # ]
	I1211 00:11:34.988299   39129 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
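	Based on the keys listed above, a default-validator drop-in that rejects OCI hook injection and requires one plugin might look like this (the plugin name is hypothetical):

	  [crio.nri.default_validator]
	  nri_enable_default_validator = true
	  nri_validator_reject_oci_hook_adjustment = true
	  nri_validator_required_plugins = [
	  	"my-policy-plugin", # hypothetical plugin name
	  ]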
	I1211 00:11:34.988306   39129 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1211 00:11:34.988309   39129 command_runner.go:130] > [crio.stats]
	I1211 00:11:34.988316   39129 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1211 00:11:34.988321   39129 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1211 00:11:34.988324   39129 command_runner.go:130] > # stats_collection_period = 0
	I1211 00:11:34.988334   39129 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1211 00:11:34.988341   39129 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1211 00:11:34.988345   39129 command_runner.go:130] > # collection_period = 0
	I1211 00:11:34.988741   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943588402Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1211 00:11:34.988759   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943910852Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1211 00:11:34.988775   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944105801Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1211 00:11:34.988788   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944281599Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1211 00:11:34.988804   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944534263Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.988813   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944919976Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1211 00:11:34.988827   39129 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1211 00:11:34.988906   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:34.988923   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:34.988942   39129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:11:34.988966   39129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:11:34.989098   39129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:11:34.989171   39129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:11:34.996103   39129 command_runner.go:130] > kubeadm
	I1211 00:11:34.996124   39129 command_runner.go:130] > kubectl
	I1211 00:11:34.996130   39129 command_runner.go:130] > kubelet
	I1211 00:11:34.996965   39129 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:11:34.997027   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:11:35.004524   39129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:11:35.022259   39129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:11:35.035877   39129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 00:11:35.049665   39129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:11:35.053270   39129 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1211 00:11:35.053410   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:35.173051   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:35.663593   39129 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:11:35.663611   39129 certs.go:195] generating shared ca certs ...
	I1211 00:11:35.663626   39129 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:35.663843   39129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:11:35.663918   39129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:11:35.664081   39129 certs.go:257] generating profile certs ...
	I1211 00:11:35.664282   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:11:35.664361   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:11:35.664489   39129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:11:35.664502   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 00:11:35.664555   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 00:11:35.664574   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 00:11:35.664591   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 00:11:35.664636   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 00:11:35.664653   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 00:11:35.664664   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 00:11:35.664675   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 00:11:35.664773   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:11:35.664811   39129 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:11:35.664825   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:11:35.664885   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:11:35.664944   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:11:35.664975   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:11:35.665087   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:35.665126   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 00:11:35.665138   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.665177   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.666144   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:11:35.692413   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:11:35.716263   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:11:35.735120   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:11:35.753386   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:11:35.771269   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:11:35.789331   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:11:35.806153   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:11:35.823663   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:11:35.840043   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:11:35.857281   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:11:35.874656   39129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:11:35.887595   39129 ssh_runner.go:195] Run: openssl version
	I1211 00:11:35.893373   39129 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1211 00:11:35.893766   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.901331   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:11:35.908770   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912293   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912332   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912381   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.953295   39129 command_runner.go:130] > 3ec20f2e
	I1211 00:11:35.953382   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:11:35.960497   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.967487   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:11:35.974778   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978822   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978856   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978928   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:36.019575   39129 command_runner.go:130] > b5213941
	I1211 00:11:36.020060   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:11:36.028538   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.036748   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:11:36.045277   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049492   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049553   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049672   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.092814   39129 command_runner.go:130] > 51391683
	I1211 00:11:36.093356   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:11:36.101223   39129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105165   39129 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105191   39129 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1211 00:11:36.105198   39129 command_runner.go:130] > Device: 259,1	Inode: 1312480     Links: 1
	I1211 00:11:36.105205   39129 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:36.105212   39129 command_runner.go:130] > Access: 2025-12-11 00:07:28.485872476 +0000
	I1211 00:11:36.105217   39129 command_runner.go:130] > Modify: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105222   39129 command_runner.go:130] > Change: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105228   39129 command_runner.go:130] >  Birth: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105288   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:11:36.146158   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.146663   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:11:36.187479   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.187576   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:11:36.228130   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.228568   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:11:36.269072   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.269532   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:11:36.310317   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.310832   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1211 00:11:36.353606   39129 command_runner.go:130] > Certificate will not expire
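
Each "-checkend 86400" call above exits 0 when the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit is what would send minikube down its certificate-regeneration path instead. The check can be reproduced one-shot on any of the certs listed:

    # Exit status 0 prints the same message the log captures above.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "Certificate will not expire"
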
	I1211 00:11:36.354067   39129 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:36.354163   39129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:11:36.354246   39129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:11:36.382480   39129 cri.go:89] found id: ""
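
The empty result (found id: "") means CRI-O is not running any kube-system containers yet, which is what pushes the code into the cluster-restart path below. The query filters on the pod-namespace label the kubelet stamps on every container it starts:

    # List IDs of all (including exited) kube-system pod containers under CRI-O.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
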
	I1211 00:11:36.382557   39129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:11:36.389756   39129 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1211 00:11:36.389777   39129 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1211 00:11:36.389784   39129 command_runner.go:130] > /var/lib/minikube/etcd:
	I1211 00:11:36.390708   39129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:11:36.390737   39129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:11:36.390806   39129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:11:36.398342   39129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:11:36.398732   39129 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.398833   39129 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-786978" cluster setting kubeconfig missing "functional-786978" context setting]
	I1211 00:11:36.399137   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.399560   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.399714   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.400253   39129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 00:11:36.400273   39129 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 00:11:36.400281   39129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 00:11:36.400286   39129 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 00:11:36.400291   39129 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 00:11:36.400594   39129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:11:36.400697   39129 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1211 00:11:36.409983   39129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1211 00:11:36.410015   39129 kubeadm.go:602] duration metric: took 19.271635ms to restartPrimaryControlPlane
	I1211 00:11:36.410025   39129 kubeadm.go:403] duration metric: took 55.966406ms to StartCluster
	I1211 00:11:36.410041   39129 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410105   39129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.410754   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410951   39129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:11:36.411375   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:36.411428   39129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 00:11:36.411496   39129 addons.go:70] Setting storage-provisioner=true in profile "functional-786978"
	I1211 00:11:36.411509   39129 addons.go:239] Setting addon storage-provisioner=true in "functional-786978"
	I1211 00:11:36.411539   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.412103   39129 addons.go:70] Setting default-storageclass=true in profile "functional-786978"
	I1211 00:11:36.412128   39129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-786978"
	I1211 00:11:36.412372   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.412555   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.416027   39129 out.go:179] * Verifying Kubernetes components...
	I1211 00:11:36.418962   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:36.445616   39129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 00:11:36.448584   39129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.448615   39129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 00:11:36.448687   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.455632   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.455806   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.456398   39129 addons.go:239] Setting addon default-storageclass=true in "functional-786978"
	I1211 00:11:36.456432   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.459345   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.488078   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.511255   39129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:36.511282   39129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 00:11:36.511350   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.540894   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.608214   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:36.665748   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.679982   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.404051   39129 node_ready.go:35] waiting up to 6m0s for node "functional-786978" to be "Ready" ...
	I1211 00:11:37.404239   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.404634   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404742   39129 retry.go:31] will retry after 310.125043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404824   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404858   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404893   39129 retry.go:31] will retry after 141.721995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404991   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
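
The applies fail because kubectl validates each manifest against the apiserver's OpenAPI schema, and the apiserver behind localhost:8441 is still refusing connections; minikube therefore retries each manifest with a randomized delay (and, from the next attempt onward, switches to "kubectl apply --force"). Roughly what each retry amounts to, as a sketch; minikube drives this from retry.go with jittered delays rather than a fixed sleep:

    # Keep re-applying until the apiserver accepts the manifest.
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep 0.5   # stand-in for the randomized backoff seen in the log
    done
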
	I1211 00:11:37.547464   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.613487   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.613562   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.613592   39129 retry.go:31] will retry after 561.758211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.715754   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:37.779510   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.779557   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.779585   39129 retry.go:31] will retry after 505.869102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.904810   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.904884   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.175539   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.243137   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.243185   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.243204   39129 retry.go:31] will retry after 361.539254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.286533   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:38.344606   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.348111   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.348157   39129 retry.go:31] will retry after 829.218438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.404431   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.404511   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.404881   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.605429   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.661283   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.664833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.664864   39129 retry.go:31] will retry after 800.266997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.905185   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.905301   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.905646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:39.178251   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:39.238429   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.238472   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.238493   39129 retry.go:31] will retry after 1.184749907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.405001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.405348   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:39.405424   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:39.465581   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:39.526474   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.526525   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.526544   39129 retry.go:31] will retry after 1.807004704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.905105   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.905423   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.405603   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.423936   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:40.495739   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:40.495794   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.495811   39129 retry.go:31] will retry after 1.404783651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.334388   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:41.396786   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.396852   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.396891   39129 retry.go:31] will retry after 1.10995967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.405068   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.405184   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.405534   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:41.405602   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
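
The GET /api/v1/nodes/functional-786978 polls above repeat on a roughly 500ms cadence, with the "will retry" warning surfacing only every few cycles. The condition being waited on can be checked one-shot with kubectl, assuming a kubeconfig pointing at the same cluster:

    # Prints "True" once the node's Ready condition is satisfied.
    kubectl get node functional-786978 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
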
	I1211 00:11:41.901437   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:41.905007   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.905077   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.905313   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.984043   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.984104   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.984123   39129 retry.go:31] will retry after 1.551735429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.404784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:42.507069   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:42.562010   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:42.565655   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.565695   39129 retry.go:31] will retry after 1.834850552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.904273   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.904413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.904767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.404422   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.536095   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:43.596578   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:43.596618   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.596641   39129 retry.go:31] will retry after 3.759083682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.905026   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.905109   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.905424   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:43.905474   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:44.401015   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:44.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.404608   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:44.466004   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:44.470131   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.470162   39129 retry.go:31] will retry after 3.734519465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.904450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.904746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.404448   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.404610   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.405391   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.905314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.905389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.905730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:45.905817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:46.404489   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.404597   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.404850   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:46.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.904888   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.905184   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.356864   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:47.404412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.420245   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:47.420295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.420315   39129 retry.go:31] will retry after 2.851566945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.904846   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.904912   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.905167   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:48.205865   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:48.269575   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:48.269614   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.269633   39129 retry.go:31] will retry after 3.250947796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.404858   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.404932   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.405259   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:48.405314   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:48.905121   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.905582   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.404258   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.404342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.904741   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.272194   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:50.327238   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:50.331229   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.331261   39129 retry.go:31] will retry after 4.377849152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.404603   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.404681   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.404972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.904412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:50.904763   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:51.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.404469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:51.521211   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:51.575865   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:51.579753   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.579788   39129 retry.go:31] will retry after 10.380601314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.905193   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.905263   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.905566   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.405257   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.405339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.405613   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.904681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:53.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:53.404852   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:53.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.904440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.904804   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.404471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.404754   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.709241   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:54.767641   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:54.771055   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.771086   39129 retry.go:31] will retry after 5.957769887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.904303   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.904405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.904730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.404312   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.404383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.404693   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.904394   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:55.904849   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:56.404616   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.404692   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.405015   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:56.904919   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.904989   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.905263   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.405131   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.905342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.905419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.905761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:57.905821   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:58.404329   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.404407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.404667   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:58.904372   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.404336   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.404718   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.904404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:00.404425   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.404531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.404943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:00.405022   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:00.729113   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:00.791242   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:00.794799   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.794830   39129 retry.go:31] will retry after 11.484844112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.905270   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.405214   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.405547   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.904696   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.904770   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.905114   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.961328   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:02.020749   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:02.024939   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.024971   39129 retry.go:31] will retry after 14.651232328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
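
kubectl's hint to pass --validate=false is misleading in this situation: validation only fails because the OpenAPI download hits the same dead endpoint, and with validation disabled the apply itself would fail the same way. The underlying problem is that nothing is accepting connections on port 8441, i.e. the apiserver is down. One quick way to confirm that from Go, as a sketch using only the standard library (host and port copied from the log above):

	// probe.go: check whether anything is listening on the apiserver port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // expect "connection refused" here
			return
		}
		conn.Close()
		fmt.Println("apiserver port open; the failures lie elsewhere")
	}
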
	I1211 00:12:02.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:02.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:02.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:03.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.404777   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:03.904466   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.904548   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.904897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.404457   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.404546   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.904381   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.904772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:04.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:05.404564   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.404650   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.405040   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:05.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.404608   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.404684   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.405046   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.905071   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.905390   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:06.905442   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
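
The Accept header on every request above (application/vnd.kubernetes.protobuf,application/json) means the client prefers Kubernetes' protobuf encoding and falls back to JSON. With client-go that preference is expressed on the rest.Config; a hedged sketch, with the content types copied from the log and the kubeconfig path assumed:

	// protobuf.go: ask the apiserver for protobuf-encoded responses.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Matches the Accept header seen in the round_trippers log lines.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
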
	I1211 00:12:07.405193   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.405265   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.405584   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:07.904280   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.904352   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.404398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.904498   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:09.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.404791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:09.404848   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:09.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.404523   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.404396   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.404467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.904428   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.904505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.904831   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:11.904892   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:12.280537   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:12.342793   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:12.342833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.342853   39129 retry.go:31] will retry after 23.205348466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.405205   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.405602   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:12.904320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.904717   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.404271   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.404662   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.905297   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.905373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.905750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:13.905805   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:14.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:14.904352   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.904734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.404459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.904784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:16.404614   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.404686   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.405057   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:16.405114   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:16.676815   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:16.732715   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:16.736183   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.736213   39129 retry.go:31] will retry after 30.816141509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.404450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.404776   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.904286   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.904361   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.904615   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.404395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.904448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.904755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:18.904810   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:19.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.404533   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.404857   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:19.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.904394   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.904694   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.904311   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:21.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.404473   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:21.404887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:21.904789   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.904874   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.905204   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.404948   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.405273   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.905073   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.905146   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.905464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:23.405279   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.405347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.405687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:23.405741   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:23.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.904659   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.404824   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.404296   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.904463   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.904801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:25.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:26.404631   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.404718   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.405047   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:26.904918   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.904987   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.905309   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.405154   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.405588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.904400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:28.404315   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.404385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.404689   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:28.404748   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:28.904331   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.904750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.404573   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.404959   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.904646   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.904725   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.905092   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:30.404773   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.404846   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.405165   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:30.405221   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:30.904956   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.905034   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.905377   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.405001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.405072   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.405325   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.905650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.904301   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.904387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.904648   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:32.904697   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:33.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.404825   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:33.904520   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.904591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.404711   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.904339   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.904412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:34.904798   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:35.404390   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:35.549321   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:35.607106   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:35.610743   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:35.610780   39129 retry.go:31] will retry after 16.241459848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
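
	[editor's note] The retry.go:31 line above schedules another kubectl apply attempt after a randomized, growing delay (16.2s here, 35.2s on the later storageclass failure), so transient apiserver outages don't burn all attempts at once. A simplified sketch of that shape, capped exponential backoff with jitter, follows; the attempt count, base delay, and cap are illustrative, not minikube's actual tuning.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
	    // roughly doubling the wait each round and adding jitter so
	    // concurrent retries don't synchronize.
	    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	    	var err error
	    	wait := base
	    	for i := 0; i < attempts; i++ {
	    		if err = fn(); err == nil {
	    			return nil
	    		}
	    		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
	    		fmt.Printf("will retry after %s: %v\n", sleep, err)
	    		time.Sleep(sleep)
	    		wait *= 2
	    		if wait > time.Minute {
	    			wait = time.Minute // cap the growth
	    		}
	    	}
	    	return err
	    }

	    func main() {
	    	err := retryWithBackoff(2, time.Second, func() error {
	    		// Stand-in for the kubectl apply invocation from the log.
	    		return errors.New("connect: connection refused")
	    	})
	    	fmt.Println("final:", err)
	    }
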
	I1211 00:12:35.905109   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.905200   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.905468   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:36.404440   39129 type.go:168] "Request Body" body=""
	I1211 00:12:36.404514   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:36.404828   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:36.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:12:36.904881   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:36.905210   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:36.905281   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:37.404334   39129 type.go:168] "Request Body" body=""
	I1211 00:12:37.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:37.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:37.904377   39129 type.go:168] "Request Body" body=""
	I1211 00:12:37.904509   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:37.904813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:38.404408   39129 type.go:168] "Request Body" body=""
	I1211 00:12:38.404481   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:38.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:38.904342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:38.904429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:38.904706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:39.404416   39129 type.go:168] "Request Body" body=""
	I1211 00:12:39.404510   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:39.404857   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:39.404920   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:39.904654   39129 type.go:168] "Request Body" body=""
	I1211 00:12:39.904746   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:39.905070   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:40.404756   39129 type.go:168] "Request Body" body=""
	I1211 00:12:40.404825   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:40.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:40.904945   39129 type.go:168] "Request Body" body=""
	I1211 00:12:40.905026   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:40.905372   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:41.405159   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.405236   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.405596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:41.405654   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:41.904342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.904410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.404375   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.404773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.904495   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.904570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.404321   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.404638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:43.904791   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:44.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:44.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.904643   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.405043   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.405120   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.405458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.905241   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.905313   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.905665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:45.905721   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:46.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.404365   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.404665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:46.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.904443   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.904803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.404531   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.404614   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.404913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.553376   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:47.607763   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:47.611288   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.611317   39129 retry.go:31] will retry after 35.21019071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
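
	[editor's note] The stderr text explains the failure mode: kubectl apply performs client-side validation by downloading the cluster's OpenAPI schema, and with the apiserver refusing connections on 8441 that download fails before any manifest is submitted (--validate=false would skip the schema fetch, as the message itself notes). minikube treats this as transient and retries rather than rejecting the manifest. A hedged sketch of classifying such stderr as retryable; the substring checks are taken from the log and are illustrative, not minikube's actual logic.

	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    // isRetryableApplyError reports whether a kubectl apply failure looks
	    // like the apiserver simply isn't up yet, rather than a bad manifest.
	    // The matched substrings come from the log above; real code would
	    // inspect structured errors instead.
	    func isRetryableApplyError(stderr string) bool {
	    	for _, marker := range []string{
	    		"connection refused",
	    		"failed to download openapi",
	    	} {
	    		if strings.Contains(stderr, marker) {
	    			return true
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	stderr := `error validating "storageclass.yaml": failed to download openapi: connect: connection refused`
	    	fmt.Println("retry?", isRetryableApplyError(stderr))
	    }
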
	I1211 00:12:47.904878   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.904951   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.905249   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:48.405085   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.405161   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.405471   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:48.405525   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:48.905284   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.905364   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.905681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.405295   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.405377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.405636   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.904752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.404362   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.904691   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:50.904742   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:51.404407   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.404485   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.404838   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.852477   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:51.904839   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.904910   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.905174   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.907207   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910785   39129 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
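
	[editor's note] Once the retry budget is spent, the addon enabler gives up on storage-provisioner: addons.go bubbles the last callback error up and out.go:285 prints it as a user-facing "!" warning instead of aborting the start. A minimal sketch of that collect-and-continue shape; the function and type names here are invented for illustration.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    )

	    // enableAddons runs each addon callback, warns on failure, and keeps
	    // going, returning the names that actually came up.
	    func enableAddons(callbacks map[string]func() error) []string {
	    	var enabled []string
	    	for name, cb := range callbacks {
	    		if err := cb(); err != nil {
	    			// Mirrors: "! Enabling 'storage-provisioner' returned an error: ..."
	    			fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
	    			continue
	    		}
	    		enabled = append(enabled, name)
	    	}
	    	return enabled
	    }

	    func main() {
	    	enabled := enableAddons(map[string]func() error{
	    		"storage-provisioner": func() error { return errors.New("connection refused") },
	    	})
	    	fmt.Printf("* Enabled addons: %v\n", enabled)
	    }
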
	I1211 00:12:52.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.404612   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:52.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.904765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:52.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:53.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.404596   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.404945   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:53.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.904430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.404439   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.904380   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.904458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:55.404272   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.404347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:55.404733   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:55.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.404550   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.404631   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.404976   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.904790   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.904860   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.905139   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:57.404944   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.405013   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.405350   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:57.405406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:57.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.905273   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.905640   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.405189   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.405260   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.405511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.905275   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.905353   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.905724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.404325   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.404712   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.904425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:59.904732   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:00.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.404486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:00.904965   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.905043   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.905388   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.405097   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.405176   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.405439   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.904725   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.904806   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.905152   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:01.905207   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:02.404978   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.405084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.405396   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:02.905191   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.905264   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.905532   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.405309   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.405405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.405763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.904795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:04.404467   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.404555   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:04.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:04.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.404438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.404775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.904484   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.904554   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.904870   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:06.404545   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.404613   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.404937   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:06.404991   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:06.904732   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.904814   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.905130   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.404806   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.404877   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.405129   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.904906   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.904976   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.905335   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:08.405133   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.405212   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.405523   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:08.405575   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:08.905290   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.905357   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.404766   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.904501   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.904588   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.904943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.404293   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.404651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:10.904861   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:11.404508   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.404642   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.404988   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:11.904763   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.904841   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.905096   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.404345   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.904307   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.904388   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.904759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:13.404447   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:13.404835   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:13.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.904421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.904745   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.404439   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.404842   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.904313   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.904637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.404367   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.404787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.904488   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.904581   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:15.904954   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:16.404512   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.404576   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.404846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:16.904793   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.904870   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.905202   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.404863   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.404963   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.405289   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.905011   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.905075   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.905318   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:17.905356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:18.405098   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.405169   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.405467   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:18.905238   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.905310   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.905637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.404323   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.404716   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.904449   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.904524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.904900   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:20.404601   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.405009   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:20.405059   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:20.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.904383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.904630   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.404435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.904577   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.904658   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.905033   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.404711   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.404786   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.405042   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.821681   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:13:22.876683   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880396   39129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1211 00:13:22.883693   39129 out.go:179] * Enabled addons: 
	I1211 00:13:22.887530   39129 addons.go:530] duration metric: took 1m46.476102717s for enable addons: enabled=[]
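
The storageclass apply above fails in kubectl's client-side validation step: validation needs the apiserver's OpenAPI document, and that endpoint refuses connections just like every other request in this log. The error text itself suggests --validate=false as an escape hatch, but with the apiserver down the apply would fail either way; minikube instead logs "apply failed, will retry" and ultimately reports an empty addon list. A rough Go sketch of such an apply-with-retry callback follows; applyWithRetry, the 2s backoff, and the 90s deadline are assumptions for illustration, not minikube's actual addons.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs "kubectl apply" until it succeeds or the deadline
	// passes, mirroring the "apply failed, will retry" behavior in the log.
	func applyWithRetry(manifest string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			// --force matches the invocation in the log above; --validate=false
			// (the workaround kubectl's own error suggests) is deliberately not
			// passed, since skipping validation would not fix a dead apiserver.
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			}
			time.Sleep(2 * time.Second) // assumed backoff between attempts
		}
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 90*time.Second); err != nil {
			fmt.Println("giving up:", err)
		}
	}
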
	I1211 00:13:22.904608   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.904678   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.904957   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:22.905000   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-786978 repeats every ~500ms, with the node_ready.go:55 "connection refused" warning recurring roughly every 2s, from 00:13:23.404 through 00:14:19.904; repeated entries elided ...]
	I1211 00:14:20.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.404818   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:20.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.904822   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.404460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.404763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.904387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:22.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.404422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.404708   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:22.404753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:22.904479   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.904556   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.904841   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.404574   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.904305   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.904373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.404251   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.404345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.404672   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.904409   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.904486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.904832   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:24.904887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:25.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.404461   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.404736   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:25.904509   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.904583   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.404731   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.404818   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.904985   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.905061   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.905327   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:26.905366   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:27.405132   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.405207   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:27.905312   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.905383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.905699   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.404639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:29.404330   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:29.404817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:29.904445   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.904517   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.904836   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.404772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:31.404452   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.404538   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.404813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:31.404867   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:31.904825   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.904902   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.905256   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.405133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.405434   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.905146   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.905216   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.905460   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:33.405223   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.405303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.405614   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:33.405669   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:33.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.904380   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.404393   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.904353   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.404418   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.904262   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.904332   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:35.904703   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:36.404548   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.404942   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:36.904920   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.905001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.405180   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.405250   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.405549   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:37.904735   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:38.404400   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:38.904471   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.904540   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.904868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.404739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:40.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.404655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:40.404705   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:40.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.404427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.404749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.904650   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.904717   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.904964   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:42.404693   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.404775   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.405115   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:42.405176   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:42.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.905044   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.905384   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.405173   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.405244   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.405506   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.904350   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.904566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:44.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:45.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.404848   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:45.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.404605   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.404878   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.905004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.905351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:46.905406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:47.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.405257   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.405597   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:47.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.904346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.904600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.404314   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.904530   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.904960   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:49.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:49.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:49.904389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.904823   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.404429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.404707   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.904397   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.904467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:51.904876   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:52.404539   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.404611   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.404868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:52.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.904488   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.404507   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.404909   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.904751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:54.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:54.404785   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:54.905091   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.905164   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.905461   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.405287   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.405536   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.905310   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.905400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.905792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:56.404664   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.404738   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.405079   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:56.405134   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:56.904863   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.904929   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.905177   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.404950   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.405032   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.405383   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.905061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.905135   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.905490   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:58.405233   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.405306   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.405559   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:58.405605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:58.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.904345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.404487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.404786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.904269   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.904338   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.904596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.404353   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.904439   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.904522   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.904908   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:00.904971   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:01.404441   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:01.904840   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.904916   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.905261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.405074   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.405158   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.405505   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.905255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.905626   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:02.905685   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:03.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:03.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.904501   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.904287   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.904363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.904668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:05.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:05.404809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:05.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.904390   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.404538   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.404621   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.404968   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.905084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.905399   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:07.405134   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.405202   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.405455   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:07.405496   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:07.905236   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.905316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.905668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.404259   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.404335   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.404669   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.904348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.904675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.404767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.904456   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.904528   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.904872   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:09.904926   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 poll repeats every ~500ms from 00:15:10 through 00:16:11, each attempt failing immediately ("Response" status="" headers="" milliseconds=0) with dial tcp 192.168.49.2:8441: connect: connection refused; node_ready.go:55 logs a "will retry" warning every two to three seconds throughout ...]
	I1211 00:16:11.904829   39129 type.go:168] "Request Body" body=""
	I1211 00:16:11.904896   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:11.905193   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:11.905238   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:12.405036   39129 type.go:168] "Request Body" body=""
	I1211 00:16:12.405112   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:12.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:12.905320   39129 type.go:168] "Request Body" body=""
	I1211 00:16:12.905392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:12.905721   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:13.404376   39129 type.go:168] "Request Body" body=""
	I1211 00:16:13.404448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:13.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:13.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:16:13.904422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:13.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:14.404375   39129 type.go:168] "Request Body" body=""
	I1211 00:16:14.404459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:14.404817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:14.404880   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:14.904559   39129 type.go:168] "Request Body" body=""
	I1211 00:16:14.904633   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:14.904931   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:15.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:15.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:15.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:15.904365   39129 type.go:168] "Request Body" body=""
	I1211 00:16:15.904442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:15.904762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:16.404605   39129 type.go:168] "Request Body" body=""
	I1211 00:16:16.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:16.404941   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:16.404981   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:16.905031   39129 type.go:168] "Request Body" body=""
	I1211 00:16:16.905112   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:16.905444   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:17.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:16:17.405328   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:17.405654   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:17.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:16:17.904392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:17.904661   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:18.404405   39129 type.go:168] "Request Body" body=""
	I1211 00:16:18.404476   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:18.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:18.904504   39129 type.go:168] "Request Body" body=""
	I1211 00:16:18.904587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:18.904897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:18.904955   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:19.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:16:19.404413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:19.404743   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:19.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:16:19.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:19.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:20.404391   39129 type.go:168] "Request Body" body=""
	I1211 00:16:20.404467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:20.404796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:20.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:16:20.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:20.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:21.404369   39129 type.go:168] "Request Body" body=""
	I1211 00:16:21.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:21.404792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:21.404845   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:21.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:16:21.904419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:21.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:22.404420   39129 type.go:168] "Request Body" body=""
	I1211 00:16:22.404499   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:22.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:22.904477   39129 type.go:168] "Request Body" body=""
	I1211 00:16:22.904552   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:22.904882   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:23.404436   39129 type.go:168] "Request Body" body=""
	I1211 00:16:23.404513   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:23.404842   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:23.404895   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:23.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:16:23.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:23.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:24.404381   39129 type.go:168] "Request Body" body=""
	I1211 00:16:24.404453   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:24.404796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:24.904499   39129 type.go:168] "Request Body" body=""
	I1211 00:16:24.904582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:24.904892   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:25.404586   39129 type.go:168] "Request Body" body=""
	I1211 00:16:25.404659   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:25.404922   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:25.404961   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:25.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:16:25.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:25.904779   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:26.404518   39129 type.go:168] "Request Body" body=""
	I1211 00:16:26.404592   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:26.404907   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:26.904867   39129 type.go:168] "Request Body" body=""
	I1211 00:16:26.904933   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:26.905185   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:27.404948   39129 type.go:168] "Request Body" body=""
	I1211 00:16:27.405019   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:27.405351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:27.405411   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:27.904945   39129 type.go:168] "Request Body" body=""
	I1211 00:16:27.905014   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:27.905324   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:28.405117   39129 type.go:168] "Request Body" body=""
	I1211 00:16:28.405196   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:28.405450   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:28.905212   39129 type.go:168] "Request Body" body=""
	I1211 00:16:28.905284   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:28.905568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:29.404272   39129 type.go:168] "Request Body" body=""
	I1211 00:16:29.404346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:29.404683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:29.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:16:29.904353   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:29.904752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:29.904817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:30.404469   39129 type.go:168] "Request Body" body=""
	I1211 00:16:30.404548   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:30.404898   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:30.904597   39129 type.go:168] "Request Body" body=""
	I1211 00:16:30.904677   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:30.904983   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:31.404322   39129 type.go:168] "Request Body" body=""
	I1211 00:16:31.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:31.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:31.904776   39129 type.go:168] "Request Body" body=""
	I1211 00:16:31.904848   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:31.905203   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:31.905262   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:32.405016   39129 type.go:168] "Request Body" body=""
	I1211 00:16:32.405089   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:32.405412   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:32.905193   39129 type.go:168] "Request Body" body=""
	I1211 00:16:32.905263   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:32.905517   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:33.405261   39129 type.go:168] "Request Body" body=""
	I1211 00:16:33.405339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:33.405640   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:33.905301   39129 type.go:168] "Request Body" body=""
	I1211 00:16:33.905376   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:33.905691   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:33.905753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:34.404316   39129 type.go:168] "Request Body" body=""
	I1211 00:16:34.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:34.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:34.904399   39129 type.go:168] "Request Body" body=""
	I1211 00:16:34.904476   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:34.904832   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:35.404558   39129 type.go:168] "Request Body" body=""
	I1211 00:16:35.404634   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:35.404928   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:35.904571   39129 type.go:168] "Request Body" body=""
	I1211 00:16:35.904645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:35.904953   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:36.404628   39129 type.go:168] "Request Body" body=""
	I1211 00:16:36.404709   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:36.405010   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:36.405063   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:36.904947   39129 type.go:168] "Request Body" body=""
	I1211 00:16:36.905022   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:36.905335   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:37.405100   39129 type.go:168] "Request Body" body=""
	I1211 00:16:37.405170   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:37.405498   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:37.905264   39129 type.go:168] "Request Body" body=""
	I1211 00:16:37.905342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:37.905865   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:38.404591   39129 type.go:168] "Request Body" body=""
	I1211 00:16:38.404679   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:38.405054   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:38.405110   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:38.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:16:38.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:38.904721   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:39.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:16:39.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:39.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:39.904517   39129 type.go:168] "Request Body" body=""
	I1211 00:16:39.904592   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:39.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:40.404360   39129 type.go:168] "Request Body" body=""
	I1211 00:16:40.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:40.404740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:40.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:16:40.904441   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:40.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:40.904783   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:41.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:41.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:41.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:41.904722   39129 type.go:168] "Request Body" body=""
	I1211 00:16:41.904798   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:41.905060   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:42.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:42.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:42.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:42.904377   39129 type.go:168] "Request Body" body=""
	I1211 00:16:42.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:42.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:42.904822   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:43.404423   39129 type.go:168] "Request Body" body=""
	I1211 00:16:43.404505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:43.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:43.904350   39129 type.go:168] "Request Body" body=""
	I1211 00:16:43.904451   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:43.904759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:44.404461   39129 type.go:168] "Request Body" body=""
	I1211 00:16:44.404566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:44.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:44.904550   39129 type.go:168] "Request Body" body=""
	I1211 00:16:44.904637   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:44.904902   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:44.904951   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:45.404609   39129 type.go:168] "Request Body" body=""
	I1211 00:16:45.404709   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:45.404988   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:45.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:16:45.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:45.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:46.404515   39129 type.go:168] "Request Body" body=""
	I1211 00:16:46.404596   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:46.405024   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:46.904957   39129 type.go:168] "Request Body" body=""
	I1211 00:16:46.905029   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:46.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:46.905384   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:47.405088   39129 type.go:168] "Request Body" body=""
	I1211 00:16:47.405172   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:47.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:47.905049   39129 type.go:168] "Request Body" body=""
	I1211 00:16:47.905126   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:47.905389   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:48.405194   39129 type.go:168] "Request Body" body=""
	I1211 00:16:48.405268   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:48.405562   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:48.905286   39129 type.go:168] "Request Body" body=""
	I1211 00:16:48.905355   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:48.905692   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:48.905744   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:49.404266   39129 type.go:168] "Request Body" body=""
	I1211 00:16:49.404345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:49.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:16:49.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:49.904744   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:50.404462   39129 type.go:168] "Request Body" body=""
	I1211 00:16:50.404547   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:50.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:50.904311   39129 type.go:168] "Request Body" body=""
	I1211 00:16:50.904382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:50.904650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:51.404329   39129 type.go:168] "Request Body" body=""
	I1211 00:16:51.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:51.404729   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:51.404787   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:51.904747   39129 type.go:168] "Request Body" body=""
	I1211 00:16:51.904831   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:51.905166   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:52.404930   39129 type.go:168] "Request Body" body=""
	I1211 00:16:52.405004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:52.405261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:52.904990   39129 type.go:168] "Request Body" body=""
	I1211 00:16:52.905058   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:52.905363   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:53.405161   39129 type.go:168] "Request Body" body=""
	I1211 00:16:53.405240   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:53.405638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:53.405695   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:53.905292   39129 type.go:168] "Request Body" body=""
	I1211 00:16:53.905363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:53.905633   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:54.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:16:54.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:54.404809   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:54.904605   39129 type.go:168] "Request Body" body=""
	I1211 00:16:54.904687   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:54.905040   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:55.404740   39129 type.go:168] "Request Body" body=""
	I1211 00:16:55.404817   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:55.405074   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:55.904366   39129 type.go:168] "Request Body" body=""
	I1211 00:16:55.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:55.904834   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:55.904885   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:56.404641   39129 type.go:168] "Request Body" body=""
	I1211 00:16:56.404722   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:56.405063   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:56.904894   39129 type.go:168] "Request Body" body=""
	I1211 00:16:56.904963   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:56.905258   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:57.405073   39129 type.go:168] "Request Body" body=""
	I1211 00:16:57.405147   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:57.405497   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:57.905342   39129 type.go:168] "Request Body" body=""
	I1211 00:16:57.905415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:57.905765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:57.905820   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:58.404453   39129 type.go:168] "Request Body" body=""
	I1211 00:16:58.404534   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:58.404874   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:58.904362   39129 type.go:168] "Request Body" body=""
	I1211 00:16:58.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:58.904770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:59.404465   39129 type.go:168] "Request Body" body=""
	I1211 00:16:59.404542   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:59.404914   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:59.904595   39129 type.go:168] "Request Body" body=""
	I1211 00:16:59.904668   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:59.904932   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:00.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:00.404598   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:00.404940   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:00.404990   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:00.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:17:00.905010   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:00.905344   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... roughly 70 identical request/response cycles elided: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 probe repeats every ~500ms from 00:17:01 through 00:17:35, every response fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the "will retry" warning roughly every 2s ...]
	I1211 00:17:36.404582   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.404662   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.404987   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:36.405043   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:17:36.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.904799   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:37.404341   39129 type.go:168] "Request Body" body=""
	I1211 00:17:37.404399   39129 node_ready.go:38] duration metric: took 6m0.000266247s for node "functional-786978" to be "Ready" ...
	I1211 00:17:37.407624   39129 out.go:203] 
	W1211 00:17:37.410619   39129 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1211 00:17:37.410819   39129 out.go:285] * 
	W1211 00:17:37.413036   39129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:17:37.415867   39129 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:17:46 functional-786978 crio[5370]: time="2025-12-11T00:17:46.590803114Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f23ab746-5e09-454b-b24c-0b20fc05e27d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678058271Z" level=info msg="Checking image status: minikube-local-cache-test:functional-786978" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678233898Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678278469Z" level=info msg="Image minikube-local-cache-test:functional-786978 not found" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678349716Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-786978 found" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701597253Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-786978" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701742823Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-786978 not found" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701784071Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-786978 found" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727359319Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-786978" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727499048Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-786978 not found" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727540353Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-786978 found" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:48 functional-786978 crio[5370]: time="2025-12-11T00:17:48.720794223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6a85b45b-7853-403d-b6ec-d04782984a27 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.05478202Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.054933146Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.055097326Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663448709Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663611642Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663660817Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.686768765Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.686929852Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.687160389Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.71135857Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.711512108Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.711559978Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:50 functional-786978 crio[5370]: time="2025-12-11T00:17:50.262468188Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32fde6db-d5df-4745-80a7-7d1fc07583ef name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:17:51.813062    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:51.813494    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:51.815242    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:51.815711    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:51.817128    9413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:17:51 up 29 min,  0 user,  load average: 0.41, 0.31, 0.47
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:17:49 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:50 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 11 00:17:50 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:50 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:50 functional-786978 kubelet[9290]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:50 functional-786978 kubelet[9290]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:50 functional-786978 kubelet[9290]: E1211 00:17:50.212301    9290 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:50 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:50 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:50 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 11 00:17:50 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:50 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:50 functional-786978 kubelet[9326]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:50 functional-786978 kubelet[9326]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:51 functional-786978 kubelet[9326]: E1211 00:17:51.000581    9326 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 11 00:17:51 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:51 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:51 functional-786978 kubelet[9391]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:51 functional-786978 kubelet[9391]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:51 functional-786978 kubelet[9391]: E1211 00:17:51.715342    9391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
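The kubelet section of the log above shows the actual root cause: kubelet.service is in a systemd restart loop (counter 827-829) because this kubelet build refuses to validate its configuration on a cgroup v1 host. The error text matches the upstream KubeletConfiguration field failCgroupV1, which makes the kubelet fail fast on the legacy hierarchy when set to true. As a rough, standalone illustration of the host-side condition involved (not kubelet code), the sketch below detects the cgroup hierarchy version from the filesystem magic of /sys/fs/cgroup, the same signal systemd uses:

// Hedged diagnostic sketch: report whether this Linux host mounts the
// unified cgroup v2 hierarchy or the legacy v1 one. On the AWS arm64
// runner in this report it would print the v1 branch, which is what
// trips the kubelet validation above.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// A tmpfs mount here means the legacy v1 controller layout.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}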
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (342.329744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.48s)
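For context on the six-minute wait that dominates the log: minikube's node_ready.go polls the node object every ~500ms and treats connection errors as retryable until a 6m deadline, after which it surfaces GUEST_START. Below is a minimal hand-written sketch of that pattern with client-go; the helper name and the use of wait.PollUntilContextTimeout are my framing of the logged behaviour, not minikube's exact code.

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady blocks until the named node reports Ready=True, polling
// every 500ms for up to 6 minutes -- the interval and deadline visible in
// the log above.
func WaitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Swallow transient errors (e.g. connection refused while
				// the apiserver restarts) so the poll retries -- the
				// "will retry" behaviour logged by node_ready.go:55.
				return false, nil
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}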

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-786978 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-786978 get pods: exit status 1 (107.048235ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-786978 get pods": exit status 1
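The failing assertion here is a plain shell-out: the test invokes the kubectl shim and expects exit status 0. A self-contained sketch of that check follows; the binary path and context name are copied from the log, and the error handling is illustrative rather than the test's actual helper.

// Roughly what functional_test.go:756 does: run the kubectl shim and
// fail on a non-zero exit.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/kubectl", "--context", "functional-786978", "get", "pods")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down this prints the same "connection to the
		// server ... was refused" stderr and exit status 1 seen above.
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}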
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
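For anyone tracing the failure, the "Ports" section of the inspect output above is what the harness consults: it resolves the host port mapped to the guest's SSH port (22/tcp) with a docker Go template, the same template that appears later in the provisioning log. A minimal standalone equivalent, assuming the container name from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-786978
	# prints 32783 on this run (see "Ports" above)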
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (316.666373ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 logs -n 25: (1.006353361s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-976823 image ls --format short --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh     │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image   │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete  │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start   │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start   │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:latest                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add minikube-local-cache-test:functional-786978                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache delete minikube-local-cache-test:functional-786978                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl images                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ cache   │ functional-786978 cache reload                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ kubectl │ functional-786978 kubectl -- --context functional-786978 get pods                                                                                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
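	The cache rows above amount to a round-trip check of minikube's image cache. Reconstructed by hand from the audit entries (a sketch, not harness output), the sequence is:
	
	  out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:latest              # pull into cache, load into node
	  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest    # drop it from the node runtime
	  out/minikube-linux-arm64 -p functional-786978 cache reload                                        # re-push all cached images
	  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again once reloaded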
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:11:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:11:31.563230   39129 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:11:31.563658   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563678   39129 out.go:374] Setting ErrFile to fd 2...
	I1211 00:11:31.563685   39129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:11:31.563986   39129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:11:31.564407   39129 out.go:368] Setting JSON to false
	I1211 00:11:31.565211   39129 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1378,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:11:31.565283   39129 start.go:143] virtualization:  
	I1211 00:11:31.568710   39129 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:11:31.572525   39129 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:11:31.572647   39129 notify.go:221] Checking for updates...
	I1211 00:11:31.578309   39129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:11:31.581264   39129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:31.584071   39129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:11:31.586801   39129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:11:31.589632   39129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:11:31.593067   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:31.593203   39129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:11:31.624525   39129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:11:31.624640   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.680227   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.670392474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.680335   39129 docker.go:319] overlay module found
	I1211 00:11:31.683507   39129 out.go:179] * Using the docker driver based on existing profile
	I1211 00:11:31.686334   39129 start.go:309] selected driver: docker
	I1211 00:11:31.686351   39129 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.686457   39129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:11:31.686564   39129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:11:31.744265   39129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:11:31.73545255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:11:31.744665   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:31.744728   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:31.744781   39129 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:31.747938   39129 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:11:31.750895   39129 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:11:31.753857   39129 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:11:31.756592   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:31.756636   39129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:11:31.756650   39129 cache.go:65] Caching tarball of preloaded images
	I1211 00:11:31.756687   39129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:11:31.756736   39129 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:11:31.756746   39129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:11:31.756847   39129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:11:31.775263   39129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:11:31.775283   39129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:11:31.775304   39129 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:11:31.775335   39129 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:11:31.775391   39129 start.go:364] duration metric: took 34.412µs to acquireMachinesLock for "functional-786978"
	I1211 00:11:31.775414   39129 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:11:31.775420   39129 fix.go:54] fixHost starting: 
	I1211 00:11:31.775679   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:31.791888   39129 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:11:31.791920   39129 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:11:31.795111   39129 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:11:31.795143   39129 machine.go:94] provisionDockerMachine start ...
	I1211 00:11:31.795229   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.811419   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.811754   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.811770   39129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:11:31.962366   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:31.962392   39129 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:11:31.962456   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:31.979928   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:31.980236   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:31.980251   39129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:11:32.139976   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:11:32.140054   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.158886   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.159253   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.159279   39129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:11:32.307553   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 00:11:32.307588   39129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:11:32.307609   39129 ubuntu.go:190] setting up certificates
	I1211 00:11:32.307618   39129 provision.go:84] configureAuth start
	I1211 00:11:32.307677   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:32.326881   39129 provision.go:143] copyHostCerts
	I1211 00:11:32.326928   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.326981   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:11:32.326990   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:11:32.327094   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:11:32.327189   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327219   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:11:32.327229   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:11:32.327259   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:11:32.327306   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327328   39129 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:11:32.327337   39129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:11:32.327369   39129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:11:32.327438   39129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:11:32.651770   39129 provision.go:177] copyRemoteCerts
	I1211 00:11:32.651883   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:11:32.651966   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.672496   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:32.786699   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 00:11:32.786771   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:11:32.804288   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 00:11:32.804348   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:11:32.822111   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 00:11:32.822172   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 00:11:32.839310   39129 provision.go:87] duration metric: took 531.679958ms to configureAuth
	I1211 00:11:32.839337   39129 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:11:32.839540   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:32.839656   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:32.857209   39129 main.go:143] libmachine: Using SSH client type: native
	I1211 00:11:32.857554   39129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:11:32.857577   39129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:11:33.187304   39129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:11:33.187369   39129 machine.go:97] duration metric: took 1.392217167s to provisionDockerMachine
	I1211 00:11:33.187397   39129 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:11:33.187428   39129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:11:33.187507   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:11:33.187571   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.206116   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.310766   39129 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:11:33.313950   39129 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1211 00:11:33.313971   39129 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1211 00:11:33.313977   39129 command_runner.go:130] > VERSION_ID="12"
	I1211 00:11:33.313982   39129 command_runner.go:130] > VERSION="12 (bookworm)"
	I1211 00:11:33.313987   39129 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1211 00:11:33.313990   39129 command_runner.go:130] > ID=debian
	I1211 00:11:33.313995   39129 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1211 00:11:33.314000   39129 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1211 00:11:33.314006   39129 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1211 00:11:33.314074   39129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:11:33.314099   39129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:11:33.314110   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:11:33.314165   39129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:11:33.314254   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:11:33.314265   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 00:11:33.314342   39129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:11:33.314349   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> /etc/test/nested/copy/4875/hosts
	I1211 00:11:33.314395   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:11:33.321833   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:33.338845   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:11:33.355788   39129 start.go:296] duration metric: took 168.358579ms for postStartSetup
	I1211 00:11:33.355933   39129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:11:33.355981   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.374136   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.483570   39129 command_runner.go:130] > 14%
	I1211 00:11:33.484133   39129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:11:33.488331   39129 command_runner.go:130] > 168G
	I1211 00:11:33.488874   39129 fix.go:56] duration metric: took 1.713448769s for fixHost
	I1211 00:11:33.488896   39129 start.go:83] releasing machines lock for "functional-786978", held for 1.713491657s
	I1211 00:11:33.488966   39129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:11:33.505970   39129 ssh_runner.go:195] Run: cat /version.json
	I1211 00:11:33.506004   39129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:11:33.506020   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.506067   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:33.524523   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.532688   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:33.712031   39129 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1211 00:11:33.714840   39129 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1211 00:11:33.715004   39129 ssh_runner.go:195] Run: systemctl --version
	I1211 00:11:33.720988   39129 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1211 00:11:33.721023   39129 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1211 00:11:33.721418   39129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:11:33.758142   39129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1211 00:11:33.762640   39129 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1211 00:11:33.762695   39129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:11:33.762759   39129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:11:33.770580   39129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
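	The find invocation above is logged with its shell quoting stripped; restored (an approximation of the quoting, relying on GNU find substituting {} inside the -exec argument), it reads roughly:
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;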
	I1211 00:11:33.770605   39129 start.go:496] detecting cgroup driver to use...
	I1211 00:11:33.770636   39129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:11:33.770683   39129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:11:33.785751   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:11:33.798781   39129 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:11:33.798859   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:11:33.814594   39129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:11:33.828060   39129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:11:33.939426   39129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:11:34.063996   39129 docker.go:234] disabling docker service ...
	I1211 00:11:34.064079   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:11:34.088847   39129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:11:34.106427   39129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:11:34.233444   39129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:11:34.359250   39129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:11:34.371772   39129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:11:34.384768   39129 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
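	With /etc/crictl.yaml now pointing at the CRI-O socket, bare crictl calls reach CRI-O; the equivalent without the config file (a sketch) would be:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version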
	I1211 00:11:34.385910   39129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:11:34.386015   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.395329   39129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:11:34.395408   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.404378   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.412986   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.421585   39129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:11:34.429722   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.438361   39129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.447060   39129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
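	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following keys (reconstructed from the commands shown; the real file may carry more):
	
	  sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  # expected to include, after the edits:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]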
	I1211 00:11:34.456153   39129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:11:34.462793   39129 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1211 00:11:34.463922   39129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:11:34.471096   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:34.576052   39129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 00:11:34.729272   39129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:11:34.729346   39129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:11:34.732930   39129 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1211 00:11:34.732954   39129 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1211 00:11:34.732962   39129 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1211 00:11:34.732969   39129 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:34.732973   39129 command_runner.go:130] > Access: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732985   39129 command_runner.go:130] > Modify: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732992   39129 command_runner.go:130] > Change: 2025-12-11 00:11:34.680037554 +0000
	I1211 00:11:34.732995   39129 command_runner.go:130] >  Birth: -
	I1211 00:11:34.733171   39129 start.go:564] Will wait 60s for crictl version
	I1211 00:11:34.733232   39129 ssh_runner.go:195] Run: which crictl
	I1211 00:11:34.736601   39129 command_runner.go:130] > /usr/local/bin/crictl
	I1211 00:11:34.736687   39129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:11:34.757793   39129 command_runner.go:130] > Version:  0.1.0
	I1211 00:11:34.757906   39129 command_runner.go:130] > RuntimeName:  cri-o
	I1211 00:11:34.757921   39129 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1211 00:11:34.757928   39129 command_runner.go:130] > RuntimeApiVersion:  v1
	I1211 00:11:34.760151   39129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:11:34.760230   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.787961   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.787986   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.787993   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.787998   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.788005   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.788009   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.788013   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.788019   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.788024   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.788028   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.788035   39129 command_runner.go:130] >      static
	I1211 00:11:34.788039   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.788043   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.788051   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.788055   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.788058   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.788069   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.788074   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.788080   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.788088   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.789644   39129 ssh_runner.go:195] Run: crio --version
	I1211 00:11:34.815359   39129 command_runner.go:130] > crio version 1.34.3
	I1211 00:11:34.815385   39129 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1211 00:11:34.815392   39129 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1211 00:11:34.815397   39129 command_runner.go:130] >    GitTreeState:   dirty
	I1211 00:11:34.815402   39129 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1211 00:11:34.815425   39129 command_runner.go:130] >    GoVersion:      go1.24.6
	I1211 00:11:34.815432   39129 command_runner.go:130] >    Compiler:       gc
	I1211 00:11:34.815439   39129 command_runner.go:130] >    Platform:       linux/arm64
	I1211 00:11:34.815448   39129 command_runner.go:130] >    Linkmode:       static
	I1211 00:11:34.815452   39129 command_runner.go:130] >    BuildTags:
	I1211 00:11:34.815456   39129 command_runner.go:130] >      static
	I1211 00:11:34.815460   39129 command_runner.go:130] >      netgo
	I1211 00:11:34.815468   39129 command_runner.go:130] >      osusergo
	I1211 00:11:34.815473   39129 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1211 00:11:34.815480   39129 command_runner.go:130] >      seccomp
	I1211 00:11:34.815484   39129 command_runner.go:130] >      apparmor
	I1211 00:11:34.815491   39129 command_runner.go:130] >      selinux
	I1211 00:11:34.815496   39129 command_runner.go:130] >    LDFlags:          unknown
	I1211 00:11:34.815505   39129 command_runner.go:130] >    SeccompEnabled:   true
	I1211 00:11:34.815512   39129 command_runner.go:130] >    AppArmorEnabled:  false
	I1211 00:11:34.822208   39129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:11:34.825193   39129 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:11:34.839960   39129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:11:34.843868   39129 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1211 00:11:34.843970   39129 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:11:34.844072   39129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:11:34.844127   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.876890   39129 command_runner.go:130] > {
	I1211 00:11:34.876911   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.876915   39129 command_runner.go:130] >     {
	I1211 00:11:34.876923   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.876928   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.876934   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.876937   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876941   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.876951   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.876963   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.876967   39129 command_runner.go:130] >       ],
	I1211 00:11:34.876971   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.876979   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.876984   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.876987   39129 command_runner.go:130] >     },
	I1211 00:11:34.876991   39129 command_runner.go:130] >     {
	I1211 00:11:34.876997   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.877005   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877011   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.877014   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877018   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877026   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.877038   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.877042   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877046   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.877053   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877060   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877067   39129 command_runner.go:130] >     },
	I1211 00:11:34.877070   39129 command_runner.go:130] >     {
	I1211 00:11:34.877077   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.877089   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877094   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.877098   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877113   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877124   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.877132   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.877139   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877143   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.877147   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.877151   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877154   39129 command_runner.go:130] >     },
	I1211 00:11:34.877158   39129 command_runner.go:130] >     {
	I1211 00:11:34.877165   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.877171   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877176   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.877180   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877186   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877194   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.877204   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.877211   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877216   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.877219   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877224   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877234   39129 command_runner.go:130] >       },
	I1211 00:11:34.877242   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877253   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877257   39129 command_runner.go:130] >     },
	I1211 00:11:34.877260   39129 command_runner.go:130] >     {
	I1211 00:11:34.877267   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.877271   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877280   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.877287   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877291   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877299   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.877309   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.877317   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877326   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.877334   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877343   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877347   39129 command_runner.go:130] >       },
	I1211 00:11:34.877351   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877359   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877363   39129 command_runner.go:130] >     },
	I1211 00:11:34.877367   39129 command_runner.go:130] >     {
	I1211 00:11:34.877374   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.877381   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877387   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.877390   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877394   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877411   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.877420   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.877426   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877430   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.877434   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877438   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877441   39129 command_runner.go:130] >       },
	I1211 00:11:34.877445   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877450   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877455   39129 command_runner.go:130] >     },
	I1211 00:11:34.877459   39129 command_runner.go:130] >     {
	I1211 00:11:34.877473   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.877476   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.877490   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877494   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877502   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.877512   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.877516   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877520   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.877527   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877534   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877538   39129 command_runner.go:130] >     },
	I1211 00:11:34.877550   39129 command_runner.go:130] >     {
	I1211 00:11:34.877556   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.877560   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877565   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.877571   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877575   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877582   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.877602   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.877606   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877614   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.877618   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877630   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.877633   39129 command_runner.go:130] >       },
	I1211 00:11:34.877636   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877640   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.877646   39129 command_runner.go:130] >     },
	I1211 00:11:34.877649   39129 command_runner.go:130] >     {
	I1211 00:11:34.877656   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.877662   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.877667   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.877670   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877674   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.877681   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.877695   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.877699   39129 command_runner.go:130] >       ],
	I1211 00:11:34.877703   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.877707   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.877714   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.877717   39129 command_runner.go:130] >       },
	I1211 00:11:34.877721   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.877732   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.877738   39129 command_runner.go:130] >     }
	I1211 00:11:34.877741   39129 command_runner.go:130] >   ]
	I1211 00:11:34.877744   39129 command_runner.go:130] > }
	I1211 00:11:34.877906   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.877920   39129 crio.go:433] Images already preloaded, skipping extraction
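	minikube issues the same crictl query twice here: once to decide whether the preload tarball needs extracting, and again after concluding it does not. To eyeball the same image list on the node, a hedged one-liner (the jq filter is an assumption and expects every entry to carry at least one repo tag, as all nine above do):

	    sudo crictl images --output json \
	      | jq -r '.images[] | .repoTags[0] + "  " + .size'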
	I1211 00:11:34.877980   39129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:11:34.904837   39129 command_runner.go:130] > {
	I1211 00:11:34.904873   39129 command_runner.go:130] >   "images":  [
	I1211 00:11:34.904879   39129 command_runner.go:130] >     {
	I1211 00:11:34.904887   39129 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1211 00:11:34.904893   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904899   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1211 00:11:34.904903   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904925   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.904940   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1211 00:11:34.904949   39129 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1211 00:11:34.904958   39129 command_runner.go:130] >       ],
	I1211 00:11:34.904962   39129 command_runner.go:130] >       "size":  "111333938",
	I1211 00:11:34.904966   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.904971   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.904975   39129 command_runner.go:130] >     },
	I1211 00:11:34.904978   39129 command_runner.go:130] >     {
	I1211 00:11:34.904985   39129 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1211 00:11:34.904989   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.904999   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1211 00:11:34.905010   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905015   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905023   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1211 00:11:34.905032   39129 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1211 00:11:34.905038   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905042   39129 command_runner.go:130] >       "size":  "29037500",
	I1211 00:11:34.905046   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905054   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905064   39129 command_runner.go:130] >     },
	I1211 00:11:34.905068   39129 command_runner.go:130] >     {
	I1211 00:11:34.905075   39129 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1211 00:11:34.905079   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905084   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1211 00:11:34.905090   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905095   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905103   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1211 00:11:34.905113   39129 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1211 00:11:34.905121   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905126   39129 command_runner.go:130] >       "size":  "74491780",
	I1211 00:11:34.905130   39129 command_runner.go:130] >       "username":  "nonroot",
	I1211 00:11:34.905134   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905143   39129 command_runner.go:130] >     },
	I1211 00:11:34.905146   39129 command_runner.go:130] >     {
	I1211 00:11:34.905153   39129 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1211 00:11:34.905162   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905167   39129 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1211 00:11:34.905170   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905175   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905182   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1211 00:11:34.905192   39129 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1211 00:11:34.905195   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905199   39129 command_runner.go:130] >       "size":  "60857170",
	I1211 00:11:34.905209   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905217   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905228   39129 command_runner.go:130] >       },
	I1211 00:11:34.905237   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905244   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905248   39129 command_runner.go:130] >     },
	I1211 00:11:34.905251   39129 command_runner.go:130] >     {
	I1211 00:11:34.905258   39129 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1211 00:11:34.905262   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905267   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1211 00:11:34.905272   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905276   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905284   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1211 00:11:34.905295   39129 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1211 00:11:34.905302   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905306   39129 command_runner.go:130] >       "size":  "84949999",
	I1211 00:11:34.905310   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905315   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905322   39129 command_runner.go:130] >       },
	I1211 00:11:34.905326   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905330   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905334   39129 command_runner.go:130] >     },
	I1211 00:11:34.905337   39129 command_runner.go:130] >     {
	I1211 00:11:34.905351   39129 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1211 00:11:34.905355   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905361   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1211 00:11:34.905368   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905378   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905391   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1211 00:11:34.905400   39129 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1211 00:11:34.905408   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905413   39129 command_runner.go:130] >       "size":  "72170325",
	I1211 00:11:34.905417   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905424   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905431   39129 command_runner.go:130] >       },
	I1211 00:11:34.905435   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905439   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905441   39129 command_runner.go:130] >     },
	I1211 00:11:34.905444   39129 command_runner.go:130] >     {
	I1211 00:11:34.905451   39129 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1211 00:11:34.905457   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905463   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1211 00:11:34.905466   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905470   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905481   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1211 00:11:34.905492   39129 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1211 00:11:34.905496   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905500   39129 command_runner.go:130] >       "size":  "74106775",
	I1211 00:11:34.905509   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905513   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905516   39129 command_runner.go:130] >     },
	I1211 00:11:34.905519   39129 command_runner.go:130] >     {
	I1211 00:11:34.905526   39129 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1211 00:11:34.905535   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905541   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1211 00:11:34.905544   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905548   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905556   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1211 00:11:34.905573   39129 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1211 00:11:34.905577   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905581   39129 command_runner.go:130] >       "size":  "49822549",
	I1211 00:11:34.905585   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905589   39129 command_runner.go:130] >         "value":  "0"
	I1211 00:11:34.905592   39129 command_runner.go:130] >       },
	I1211 00:11:34.905596   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905604   39129 command_runner.go:130] >       "pinned":  false
	I1211 00:11:34.905612   39129 command_runner.go:130] >     },
	I1211 00:11:34.905619   39129 command_runner.go:130] >     {
	I1211 00:11:34.905625   39129 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1211 00:11:34.905629   39129 command_runner.go:130] >       "repoTags":  [
	I1211 00:11:34.905634   39129 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.905637   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905641   39129 command_runner.go:130] >       "repoDigests":  [
	I1211 00:11:34.905657   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1211 00:11:34.905665   39129 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1211 00:11:34.905671   39129 command_runner.go:130] >       ],
	I1211 00:11:34.905675   39129 command_runner.go:130] >       "size":  "519884",
	I1211 00:11:34.905679   39129 command_runner.go:130] >       "uid":  {
	I1211 00:11:34.905683   39129 command_runner.go:130] >         "value":  "65535"
	I1211 00:11:34.905686   39129 command_runner.go:130] >       },
	I1211 00:11:34.905690   39129 command_runner.go:130] >       "username":  "",
	I1211 00:11:34.905697   39129 command_runner.go:130] >       "pinned":  true
	I1211 00:11:34.905700   39129 command_runner.go:130] >     }
	I1211 00:11:34.905703   39129 command_runner.go:130] >   ]
	I1211 00:11:34.905705   39129 command_runner.go:130] > }
	I1211 00:11:34.908324   39129 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:11:34.908347   39129 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:11:34.908354   39129 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:11:34.908461   39129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
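	The kubelet fragment above becomes a systemd drop-in for the kubelet unit on the node; the merged unit and the effective ExecStart can be inspected with standard systemd tooling (nothing minikube-specific):

	    systemctl cat kubelet                              # unit plus all drop-ins
	    systemctl show kubelet -p ExecStart --no-pager     # flags actually in effect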
	I1211 00:11:34.908543   39129 ssh_runner.go:195] Run: crio config
	I1211 00:11:34.971791   39129 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1211 00:11:34.971813   39129 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1211 00:11:34.971821   39129 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1211 00:11:34.971824   39129 command_runner.go:130] > #
	I1211 00:11:34.971832   39129 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1211 00:11:34.971839   39129 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1211 00:11:34.971846   39129 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1211 00:11:34.971853   39129 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1211 00:11:34.971857   39129 command_runner.go:130] > # reload'.
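	As the comment notes, options marked for live reload take effect on SIGHUP without restarting the daemon. A hedged example, assuming CRI-O runs under the upstream systemd unit (which defines ExecReload as kill -HUP):

	    sudo systemctl reload crio          # delivers SIGHUP via ExecReload
	    sudo kill -HUP "$(pidof crio)"      # equivalent, signalling the daemon directly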
	I1211 00:11:34.971875   39129 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1211 00:11:34.971882   39129 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1211 00:11:34.971888   39129 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1211 00:11:34.971894   39129 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1211 00:11:34.971898   39129 command_runner.go:130] > [crio]
	I1211 00:11:34.971903   39129 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1211 00:11:34.971908   39129 command_runner.go:130] > # containers images, in this directory.
	I1211 00:11:34.972453   39129 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1211 00:11:34.972468   39129 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1211 00:11:34.973023   39129 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1211 00:11:34.973035   39129 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1211 00:11:34.973741   39129 command_runner.go:130] > # imagestore = ""
	I1211 00:11:34.973760   39129 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1211 00:11:34.973768   39129 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1211 00:11:34.973950   39129 command_runner.go:130] > # storage_driver = "overlay"
	I1211 00:11:34.973965   39129 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1211 00:11:34.973972   39129 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1211 00:11:34.974083   39129 command_runner.go:130] > # storage_option = [
	I1211 00:11:34.974240   39129 command_runner.go:130] > # ]
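	Both knobs above are commented out, so containers-storage defaults apply. A hypothetical override in /etc/crio/crio.conf that pins the driver and forwards a mount option (overlay.mountopt is a containers-storage.conf(5) option; the value is illustrative):

	    [crio]
	    storage_driver = "overlay"
	    storage_option = [
	        "overlay.mountopt=nodev",
	    ]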
	I1211 00:11:34.974255   39129 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1211 00:11:34.974262   39129 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1211 00:11:34.974433   39129 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1211 00:11:34.974477   39129 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1211 00:11:34.974487   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1211 00:11:34.974492   39129 command_runner.go:130] > # always happen on a node reboot
	I1211 00:11:34.974707   39129 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1211 00:11:34.974755   39129 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1211 00:11:34.974769   39129 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1211 00:11:34.974774   39129 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1211 00:11:34.974951   39129 command_runner.go:130] > # version_file_persist = ""
	I1211 00:11:34.974999   39129 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1211 00:11:34.975014   39129 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1211 00:11:34.975286   39129 command_runner.go:130] > # internal_wipe = true
	I1211 00:11:34.975303   39129 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1211 00:11:34.975309   39129 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1211 00:11:34.975533   39129 command_runner.go:130] > # internal_repair = true
	I1211 00:11:34.975547   39129 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1211 00:11:34.975554   39129 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1211 00:11:34.975560   39129 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1211 00:11:34.975800   39129 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1211 00:11:34.975813   39129 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1211 00:11:34.975817   39129 command_runner.go:130] > [crio.api]
	I1211 00:11:34.975838   39129 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1211 00:11:34.976047   39129 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1211 00:11:34.976068   39129 command_runner.go:130] > # IP address on which the stream server will listen.
	I1211 00:11:34.976289   39129 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1211 00:11:34.976305   39129 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1211 00:11:34.976322   39129 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1211 00:11:34.976522   39129 command_runner.go:130] > # stream_port = "0"
	I1211 00:11:34.976537   39129 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1211 00:11:34.976743   39129 command_runner.go:130] > # stream_enable_tls = false
	I1211 00:11:34.976759   39129 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1211 00:11:34.976966   39129 command_runner.go:130] > # stream_idle_timeout = ""
	I1211 00:11:34.976981   39129 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1211 00:11:34.976987   39129 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977102   39129 command_runner.go:130] > # stream_tls_cert = ""
	I1211 00:11:34.977116   39129 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1211 00:11:34.977122   39129 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1211 00:11:34.977375   39129 command_runner.go:130] > # stream_tls_key = ""
	I1211 00:11:34.977408   39129 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1211 00:11:34.977433   39129 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1211 00:11:34.977440   39129 command_runner.go:130] > # automatically pick up the changes.
	I1211 00:11:34.977571   39129 command_runner.go:130] > # stream_tls_ca = ""
	I1211 00:11:34.977641   39129 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977779   39129 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1211 00:11:34.977797   39129 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1211 00:11:34.977991   39129 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1211 00:11:34.978007   39129 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1211 00:11:34.978040   39129 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1211 00:11:34.978056   39129 command_runner.go:130] > [crio.runtime]
	I1211 00:11:34.978069   39129 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1211 00:11:34.978076   39129 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1211 00:11:34.978080   39129 command_runner.go:130] > # "nofile=1024:2048"
	I1211 00:11:34.978086   39129 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1211 00:11:34.978208   39129 command_runner.go:130] > # default_ulimits = [
	I1211 00:11:34.978352   39129 command_runner.go:130] > # ]
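	Filling in the "<ulimit name>=<soft limit>:<hard limit>" format from the comment above with its own nofile example:

	    [crio.runtime]
	    default_ulimits = [
	        "nofile=1024:2048",
	    ]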
	I1211 00:11:34.978369   39129 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1211 00:11:34.978551   39129 command_runner.go:130] > # no_pivot = false
	I1211 00:11:34.978566   39129 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1211 00:11:34.978572   39129 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1211 00:11:34.978723   39129 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1211 00:11:34.978739   39129 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1211 00:11:34.978744   39129 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1211 00:11:34.978775   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.978921   39129 command_runner.go:130] > # conmon = ""
	I1211 00:11:34.978933   39129 command_runner.go:130] > # Cgroup setting for conmon
	I1211 00:11:34.978941   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1211 00:11:34.979286   39129 command_runner.go:130] > conmon_cgroup = "pod"
	I1211 00:11:34.979301   39129 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1211 00:11:34.979307   39129 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1211 00:11:34.979343   39129 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1211 00:11:34.979348   39129 command_runner.go:130] > # conmon_env = [
	I1211 00:11:34.979496   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979512   39129 command_runner.go:130] > # Additional environment variables to set for all the
	I1211 00:11:34.979518   39129 command_runner.go:130] > # containers. These are overridden if set in the
	I1211 00:11:34.979524   39129 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1211 00:11:34.979552   39129 command_runner.go:130] > # default_env = [
	I1211 00:11:34.979707   39129 command_runner.go:130] > # ]
	I1211 00:11:34.979725   39129 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1211 00:11:34.979734   39129 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1211 00:11:34.979983   39129 command_runner.go:130] > # selinux = false
	I1211 00:11:34.980000   39129 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1211 00:11:34.980009   39129 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1211 00:11:34.980015   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980366   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.980414   39129 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1211 00:11:34.980429   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980434   39129 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1211 00:11:34.980447   39129 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1211 00:11:34.980453   39129 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1211 00:11:34.980464   39129 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1211 00:11:34.980471   39129 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1211 00:11:34.980493   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.980499   39129 command_runner.go:130] > # apparmor_profile = "crio-default"
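	Note AppArmorEnabled is false on this node (see the runtime info earlier in the log), so the crio-default profile is moot here. On a host with AppArmor, a pod opts out of the default per container via its spec; a hedged fragment using the current Kubernetes field (the profile name is hypothetical):

	    securityContext:
	      appArmorProfile:
	        type: Localhost
	        localhostProfile: my-custom-profile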
	I1211 00:11:34.980514   39129 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1211 00:11:34.980524   39129 command_runner.go:130] > # the cgroup blockio controller.
	I1211 00:11:34.980678   39129 command_runner.go:130] > # blockio_config_file = ""
	I1211 00:11:34.980713   39129 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1211 00:11:34.980723   39129 command_runner.go:130] > # blockio parameters.
	I1211 00:11:34.980981   39129 command_runner.go:130] > # blockio_reload = false
	I1211 00:11:34.980995   39129 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1211 00:11:34.980999   39129 command_runner.go:130] > # irqbalance daemon.
	I1211 00:11:34.981198   39129 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1211 00:11:34.981209   39129 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1211 00:11:34.981217   39129 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1211 00:11:34.981265   39129 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1211 00:11:34.981385   39129 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1211 00:11:34.981396   39129 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1211 00:11:34.981402   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.981515   39129 command_runner.go:130] > # rdt_config_file = ""
	I1211 00:11:34.981525   39129 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1211 00:11:34.981657   39129 command_runner.go:130] > cgroup_manager = "cgroupfs"
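	This is one of the few uncommented settings, and it must agree with the kubelet's cgroup driver; the --cgroups-per-qos=false and --enforce-node-allocatable= flags in the ExecStart above are the matching kubelet side. As a sketch, the equivalent KubeletConfiguration fragment would be:

	    # /var/lib/kubelet/config.yaml (fragment)
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    cgroupDriver: cgroupfs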
	I1211 00:11:34.981668   39129 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1211 00:11:34.981795   39129 command_runner.go:130] > # separate_pull_cgroup = ""
	I1211 00:11:34.981809   39129 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1211 00:11:34.981816   39129 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1211 00:11:34.981820   39129 command_runner.go:130] > # will be added.
	I1211 00:11:34.981926   39129 command_runner.go:130] > # default_capabilities = [
	I1211 00:11:34.982055   39129 command_runner.go:130] > # 	"CHOWN",
	I1211 00:11:34.982151   39129 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1211 00:11:34.982256   39129 command_runner.go:130] > # 	"FSETID",
	I1211 00:11:34.982350   39129 command_runner.go:130] > # 	"FOWNER",
	I1211 00:11:34.982451   39129 command_runner.go:130] > # 	"SETGID",
	I1211 00:11:34.982543   39129 command_runner.go:130] > # 	"SETUID",
	I1211 00:11:34.982687   39129 command_runner.go:130] > # 	"SETPCAP",
	I1211 00:11:34.982695   39129 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1211 00:11:34.982819   39129 command_runner.go:130] > # 	"KILL",
	I1211 00:11:34.982949   39129 command_runner.go:130] > # ]
	I1211 00:11:34.982960   39129 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1211 00:11:34.982993   39129 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1211 00:11:34.983107   39129 command_runner.go:130] > # add_inheritable_capabilities = false
	I1211 00:11:34.983118   39129 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1211 00:11:34.983132   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983136   39129 command_runner.go:130] > default_sysctls = [
	I1211 00:11:34.983272   39129 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1211 00:11:34.983279   39129 command_runner.go:130] > ]
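	This default sysctl is also set explicitly: it lets container processes bind ports below 1024 without CAP_NET_BIND_SERVICE. It can be confirmed from inside the node or any pod:

	    sysctl net.ipv4.ip_unprivileged_port_start   # expected: ... = 0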
	I1211 00:11:34.983285   39129 command_runner.go:130] > # List of devices on the host that a
	I1211 00:11:34.983300   39129 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1211 00:11:34.983304   39129 command_runner.go:130] > # allowed_devices = [
	I1211 00:11:34.983428   39129 command_runner.go:130] > # 	"/dev/fuse",
	I1211 00:11:34.983527   39129 command_runner.go:130] > # 	"/dev/net/tun",
	I1211 00:11:34.983650   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983660   39129 command_runner.go:130] > # List of additional devices, specified as
	I1211 00:11:34.983668   39129 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1211 00:11:34.983680   39129 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1211 00:11:34.983687   39129 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1211 00:11:34.983813   39129 command_runner.go:130] > # additional_devices = [
	I1211 00:11:34.983820   39129 command_runner.go:130] > # ]
	I1211 00:11:34.983826   39129 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1211 00:11:34.983923   39129 command_runner.go:130] > # cdi_spec_dirs = [
	I1211 00:11:34.984053   39129 command_runner.go:130] > # 	"/etc/cdi",
	I1211 00:11:34.984060   39129 command_runner.go:130] > # 	"/var/run/cdi",
	I1211 00:11:34.984160   39129 command_runner.go:130] > # ]
	I1211 00:11:34.984177   39129 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1211 00:11:34.984184   39129 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1211 00:11:34.984195   39129 command_runner.go:130] > # Defaults to false.
	I1211 00:11:34.984334   39129 command_runner.go:130] > # device_ownership_from_security_context = false
	I1211 00:11:34.984345   39129 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1211 00:11:34.984355   39129 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1211 00:11:34.984488   39129 command_runner.go:130] > # hooks_dir = [
	I1211 00:11:34.984640   39129 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1211 00:11:34.984647   39129 command_runner.go:130] > # ]
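	Hook specs dropped into the directory above follow the oci-hooks(5) JSON format. A minimal hypothetical example (the path, stage, and always-match condition are illustrative):

	    {
	      "version": "1.0.0",
	      "hook": { "path": "/usr/local/bin/log-start.sh" },
	      "when": { "always": true },
	      "stages": ["prestart"]
	    }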
	I1211 00:11:34.984653   39129 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1211 00:11:34.984667   39129 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1211 00:11:34.984672   39129 command_runner.go:130] > # its default mounts from the following two files:
	I1211 00:11:34.984675   39129 command_runner.go:130] > #
	I1211 00:11:34.984681   39129 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1211 00:11:34.984694   39129 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1211 00:11:34.984700   39129 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1211 00:11:34.984703   39129 command_runner.go:130] > #
	I1211 00:11:34.984710   39129 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1211 00:11:34.984716   39129 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1211 00:11:34.984722   39129 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1211 00:11:34.984727   39129 command_runner.go:130] > #      only add mounts it finds in this file.
	I1211 00:11:34.984729   39129 command_runner.go:130] > #
	I1211 00:11:34.984883   39129 command_runner.go:130] > # default_mounts_file = ""
	I1211 00:11:34.984900   39129 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1211 00:11:34.984908   39129 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1211 00:11:34.985051   39129 command_runner.go:130] > # pids_limit = -1
	I1211 00:11:34.985062   39129 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1211 00:11:34.985075   39129 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1211 00:11:34.985083   39129 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1211 00:11:34.985091   39129 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1211 00:11:34.985222   39129 command_runner.go:130] > # log_size_max = -1
	I1211 00:11:34.985233   39129 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1211 00:11:34.985372   39129 command_runner.go:130] > # log_to_journald = false
	I1211 00:11:34.985382   39129 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1211 00:11:34.985404   39129 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1211 00:11:34.985411   39129 command_runner.go:130] > # Path to directory for container attach sockets.
	I1211 00:11:34.985416   39129 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1211 00:11:34.985422   39129 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1211 00:11:34.985425   39129 command_runner.go:130] > # bind_mount_prefix = ""
	I1211 00:11:34.985434   39129 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1211 00:11:34.985569   39129 command_runner.go:130] > # read_only = false
	I1211 00:11:34.985580   39129 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1211 00:11:34.985587   39129 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1211 00:11:34.985601   39129 command_runner.go:130] > # live configuration reload.
	I1211 00:11:34.985605   39129 command_runner.go:130] > # log_level = "info"
	I1211 00:11:34.985611   39129 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1211 00:11:34.985616   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.985619   39129 command_runner.go:130] > # log_filter = ""
	I1211 00:11:34.985626   39129 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985632   39129 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1211 00:11:34.985635   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985643   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985647   39129 command_runner.go:130] > # uid_mappings = ""
	I1211 00:11:34.985654   39129 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1211 00:11:34.985660   39129 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1211 00:11:34.985664   39129 command_runner.go:130] > # separated by comma.
	I1211 00:11:34.985672   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985681   39129 command_runner.go:130] > # gid_mappings = ""
	I1211 00:11:34.985688   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1211 00:11:34.985694   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985700   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985708   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985712   39129 command_runner.go:130] > # minimum_mappable_uid = -1
	I1211 00:11:34.985718   39129 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1211 00:11:34.985723   39129 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1211 00:11:34.985729   39129 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1211 00:11:34.985737   39129 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1211 00:11:34.985741   39129 command_runner.go:130] > # minimum_mappable_gid = -1
	I1211 00:11:34.985747   39129 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1211 00:11:34.985753   39129 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1211 00:11:34.985759   39129 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1211 00:11:34.985975   39129 command_runner.go:130] > # ctr_stop_timeout = 30
	I1211 00:11:34.985988   39129 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1211 00:11:34.985994   39129 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1211 00:11:34.985999   39129 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1211 00:11:34.986004   39129 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1211 00:11:34.986008   39129 command_runner.go:130] > # drop_infra_ctr = true
	I1211 00:11:34.986014   39129 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1211 00:11:34.986019   39129 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1211 00:11:34.986029   39129 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1211 00:11:34.986033   39129 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1211 00:11:34.986040   39129 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1211 00:11:34.986046   39129 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1211 00:11:34.986051   39129 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1211 00:11:34.986057   39129 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1211 00:11:34.986060   39129 command_runner.go:130] > # shared_cpuset = ""
	I1211 00:11:34.986066   39129 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1211 00:11:34.986071   39129 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1211 00:11:34.986075   39129 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1211 00:11:34.986082   39129 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1211 00:11:34.986085   39129 command_runner.go:130] > # pinns_path = ""
	I1211 00:11:34.986091   39129 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1211 00:11:34.986098   39129 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1211 00:11:34.986101   39129 command_runner.go:130] > # enable_criu_support = true
	I1211 00:11:34.986107   39129 command_runner.go:130] > # Enable/disable the generation of the container and
	I1211 00:11:34.986112   39129 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1211 00:11:34.986116   39129 command_runner.go:130] > # enable_pod_events = false
	I1211 00:11:34.986122   39129 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1211 00:11:34.986131   39129 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1211 00:11:34.986135   39129 command_runner.go:130] > # default_runtime = "crun"
	I1211 00:11:34.986140   39129 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1211 00:11:34.986148   39129 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where they are created as directories).
	I1211 00:11:34.986159   39129 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1211 00:11:34.986164   39129 command_runner.go:130] > # creation as a file is not desired either.
	I1211 00:11:34.986172   39129 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1211 00:11:34.986177   39129 command_runner.go:130] > # the hostname is being managed dynamically.
	I1211 00:11:34.986181   39129 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1211 00:11:34.986185   39129 command_runner.go:130] > # ]
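
The absent_mount_sources_to_reject option documented above is easiest to see in a drop-in. A minimal sketch, assuming the /etc/crio/crio.conf.d drop-in mechanism this log shows being loaded later, and using /etc/hostname from the example above (the drop-in file name is hypothetical):

    # Hypothetical drop-in name; /etc/hostname is the example from the config text above.
    sudo tee /etc/crio/crio.conf.d/05-reject-absent-mounts.conf <<'EOF'
    [crio.runtime]
    # Fail container creation when /etc/hostname is requested as a mount
    # source but absent from the host, instead of creating it as a directory.
    absent_mount_sources_to_reject = [
        "/etc/hostname",
    ]
    EOF
    sudo systemctl restart crio
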
	I1211 00:11:34.986192   39129 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1211 00:11:34.986198   39129 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1211 00:11:34.986205   39129 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1211 00:11:34.986210   39129 command_runner.go:130] > # Each entry in the table should follow the format:
	I1211 00:11:34.986212   39129 command_runner.go:130] > #
	I1211 00:11:34.986217   39129 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1211 00:11:34.986221   39129 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1211 00:11:34.986226   39129 command_runner.go:130] > # runtime_type = "oci"
	I1211 00:11:34.986231   39129 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1211 00:11:34.986235   39129 command_runner.go:130] > # inherit_default_runtime = false
	I1211 00:11:34.986240   39129 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1211 00:11:34.986244   39129 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1211 00:11:34.986248   39129 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1211 00:11:34.986251   39129 command_runner.go:130] > # monitor_env = []
	I1211 00:11:34.986256   39129 command_runner.go:130] > # privileged_without_host_devices = false
	I1211 00:11:34.986259   39129 command_runner.go:130] > # allowed_annotations = []
	I1211 00:11:34.986265   39129 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1211 00:11:34.986268   39129 command_runner.go:130] > # no_sync_log = false
	I1211 00:11:34.986272   39129 command_runner.go:130] > # default_annotations = {}
	I1211 00:11:34.986276   39129 command_runner.go:130] > # stream_websockets = false
	I1211 00:11:34.986279   39129 command_runner.go:130] > # seccomp_profile = ""
	I1211 00:11:34.986309   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.986315   39129 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1211 00:11:34.986324   39129 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1211 00:11:34.986330   39129 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1211 00:11:34.986337   39129 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1211 00:11:34.986340   39129 command_runner.go:130] > #   in $PATH.
	I1211 00:11:34.986346   39129 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1211 00:11:34.986350   39129 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1211 00:11:34.986356   39129 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1211 00:11:34.986359   39129 command_runner.go:130] > #   state.
	I1211 00:11:34.986366   39129 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1211 00:11:34.986375   39129 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1211 00:11:34.986381   39129 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1211 00:11:34.986387   39129 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1211 00:11:34.986392   39129 command_runner.go:130] > #   the values from the default runtime on load time.
	I1211 00:11:34.986398   39129 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1211 00:11:34.986404   39129 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1211 00:11:34.986410   39129 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1211 00:11:34.986417   39129 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1211 00:11:34.986421   39129 command_runner.go:130] > #   The currently recognized values are:
	I1211 00:11:34.986428   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1211 00:11:34.986435   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1211 00:11:34.986440   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1211 00:11:34.986446   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1211 00:11:34.986455   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1211 00:11:34.986462   39129 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1211 00:11:34.986469   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1211 00:11:34.986475   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1211 00:11:34.986481   39129 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1211 00:11:34.986487   39129 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1211 00:11:34.986494   39129 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1211 00:11:34.986500   39129 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1211 00:11:34.986505   39129 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1211 00:11:34.986511   39129 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1211 00:11:34.986517   39129 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1211 00:11:34.986528   39129 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1211 00:11:34.986534   39129 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1211 00:11:34.986538   39129 command_runner.go:130] > #   deprecated option "conmon".
	I1211 00:11:34.986545   39129 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1211 00:11:34.986550   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1211 00:11:34.986556   39129 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1211 00:11:34.986561   39129 command_runner.go:130] > #   should be moved to the container's cgroup
	I1211 00:11:34.986567   39129 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1211 00:11:34.986572   39129 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1211 00:11:34.986579   39129 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1211 00:11:34.986583   39129 command_runner.go:130] > #   conmon-rs by using:
	I1211 00:11:34.986591   39129 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1211 00:11:34.986598   39129 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1211 00:11:34.986606   39129 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1211 00:11:34.986613   39129 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1211 00:11:34.986618   39129 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1211 00:11:34.986625   39129 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1211 00:11:34.986633   39129 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1211 00:11:34.986641   39129 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1211 00:11:34.986651   39129 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1211 00:11:34.986658   39129 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1211 00:11:34.986662   39129 command_runner.go:130] > #   when a machine crash happens.
	I1211 00:11:34.986669   39129 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1211 00:11:34.986677   39129 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1211 00:11:34.986685   39129 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1211 00:11:34.986689   39129 command_runner.go:130] > #   seccomp profile for the runtime.
	I1211 00:11:34.986695   39129 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1211 00:11:34.986702   39129 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1211 00:11:34.986704   39129 command_runner.go:130] > #
	I1211 00:11:34.986708   39129 command_runner.go:130] > # Using the seccomp notifier feature:
	I1211 00:11:34.986711   39129 command_runner.go:130] > #
	I1211 00:11:34.986717   39129 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1211 00:11:34.986724   39129 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1211 00:11:34.986729   39129 command_runner.go:130] > #
	I1211 00:11:34.986739   39129 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1211 00:11:34.986745   39129 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1211 00:11:34.986748   39129 command_runner.go:130] > #
	I1211 00:11:34.986754   39129 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1211 00:11:34.986757   39129 command_runner.go:130] > # feature.
	I1211 00:11:34.986760   39129 command_runner.go:130] > #
	I1211 00:11:34.986766   39129 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1211 00:11:34.986772   39129 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1211 00:11:34.986778   39129 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1211 00:11:34.986784   39129 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1211 00:11:34.986790   39129 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1211 00:11:34.986792   39129 command_runner.go:130] > #
	I1211 00:11:34.986799   39129 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1211 00:11:34.986805   39129 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1211 00:11:34.986808   39129 command_runner.go:130] > #
	I1211 00:11:34.986814   39129 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1211 00:11:34.986820   39129 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1211 00:11:34.986822   39129 command_runner.go:130] > #
	I1211 00:11:34.986828   39129 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1211 00:11:34.986833   39129 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1211 00:11:34.986837   39129 command_runner.go:130] > # limitation.
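
Putting the notifier instructions above together, a minimal sketch (the drop-in file name and pod manifest are hypothetical; the crun handler and the drop-in directory are the ones shown in this log):

    # 1. Allow the notifier annotation for the crun handler. Note a drop-in
    #    replaces the allowed_annotations list, so the existing entry is kept.
    sudo tee /etc/crio/crio.conf.d/20-seccomp-notifier.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    allowed_annotations = [
        "io.containers.trace-syscall",
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]
    EOF
    sudo systemctl restart crio

    # 2. Run a pod with the annotation; restartPolicy must be Never (see above).
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-debug                        # hypothetical name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never
      securityContext:
        seccompProfile:
          type: RuntimeDefault                   # notifier acts on a seccomp profile
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF
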
	I1211 00:11:34.986842   39129 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1211 00:11:34.986846   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1211 00:11:34.986850   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986853   39129 command_runner.go:130] > runtime_root = "/run/crun"
	I1211 00:11:34.986857   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986860   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986864   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.986868   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.986872   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.986876   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.986880   39129 command_runner.go:130] > allowed_annotations = [
	I1211 00:11:34.986887   39129 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1211 00:11:34.986889   39129 command_runner.go:130] > ]
	I1211 00:11:34.986894   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.986898   39129 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1211 00:11:34.986902   39129 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1211 00:11:34.986906   39129 command_runner.go:130] > runtime_type = ""
	I1211 00:11:34.986909   39129 command_runner.go:130] > runtime_root = "/run/runc"
	I1211 00:11:34.986913   39129 command_runner.go:130] > inherit_default_runtime = false
	I1211 00:11:34.986917   39129 command_runner.go:130] > runtime_config_path = ""
	I1211 00:11:34.986921   39129 command_runner.go:130] > container_min_memory = ""
	I1211 00:11:34.987106   39129 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1211 00:11:34.987121   39129 command_runner.go:130] > monitor_cgroup = "pod"
	I1211 00:11:34.987127   39129 command_runner.go:130] > monitor_exec_cgroup = ""
	I1211 00:11:34.987132   39129 command_runner.go:130] > privileged_without_host_devices = false
	I1211 00:11:34.987139   39129 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1211 00:11:34.987147   39129 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1211 00:11:34.987154   39129 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1211 00:11:34.987166   39129 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1211 00:11:34.987177   39129 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1211 00:11:34.987187   39129 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1211 00:11:34.987194   39129 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1211 00:11:34.987200   39129 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1211 00:11:34.987209   39129 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1211 00:11:34.987218   39129 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1211 00:11:34.987224   39129 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1211 00:11:34.987231   39129 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1211 00:11:34.987235   39129 command_runner.go:130] > # Example:
	I1211 00:11:34.987241   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1211 00:11:34.987246   39129 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1211 00:11:34.987251   39129 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1211 00:11:34.987255   39129 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1211 00:11:34.987258   39129 command_runner.go:130] > # cpuset = "0-1"
	I1211 00:11:34.987262   39129 command_runner.go:130] > # cpushares = "5"
	I1211 00:11:34.987269   39129 command_runner.go:130] > # cpuquota = "1000"
	I1211 00:11:34.987273   39129 command_runner.go:130] > # cpuperiod = "100000"
	I1211 00:11:34.987277   39129 command_runner.go:130] > # cpulimit = "35"
	I1211 00:11:34.987280   39129 command_runner.go:130] > # Where:
	I1211 00:11:34.987284   39129 command_runner.go:130] > # The workload name is workload-type.
	I1211 00:11:34.987292   39129 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1211 00:11:34.987298   39129 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1211 00:11:34.987303   39129 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1211 00:11:34.987311   39129 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1211 00:11:34.987317   39129 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
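
As a concrete counterpart to the workload example above, a pod opting into it could look like this sketch (names and values are hypothetical; the annotation forms follow the $activation_annotation and $annotation_prefix.$resource/$ctrName patterns described above):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                           # hypothetical name
      annotations:
        io.crio/workload: ""                        # activation annotation (key only)
        io.crio.workload-type.cpushares/app: "200"  # per-container override
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF
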
	I1211 00:11:34.987322   39129 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1211 00:11:34.987328   39129 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1211 00:11:34.987332   39129 command_runner.go:130] > # Default value is set to true
	I1211 00:11:34.987336   39129 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1211 00:11:34.987342   39129 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1211 00:11:34.987346   39129 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1211 00:11:34.987350   39129 command_runner.go:130] > # Default value is set to 'false'
	I1211 00:11:34.987355   39129 command_runner.go:130] > # disable_hostport_mapping = false
	I1211 00:11:34.987361   39129 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1211 00:11:34.987369   39129 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1211 00:11:34.987372   39129 command_runner.go:130] > # timezone = ""
	I1211 00:11:34.987379   39129 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1211 00:11:34.987382   39129 command_runner.go:130] > #
	I1211 00:11:34.987387   39129 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1211 00:11:34.987393   39129 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1211 00:11:34.987396   39129 command_runner.go:130] > [crio.image]
	I1211 00:11:34.987402   39129 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1211 00:11:34.987407   39129 command_runner.go:130] > # default_transport = "docker://"
	I1211 00:11:34.987413   39129 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1211 00:11:34.987419   39129 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987423   39129 command_runner.go:130] > # global_auth_file = ""
	I1211 00:11:34.987428   39129 command_runner.go:130] > # The image used to instantiate infra containers.
	I1211 00:11:34.987432   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987442   39129 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1211 00:11:34.987448   39129 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1211 00:11:34.987454   39129 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1211 00:11:34.987458   39129 command_runner.go:130] > # This option supports live configuration reload.
	I1211 00:11:34.987463   39129 command_runner.go:130] > # pause_image_auth_file = ""
	I1211 00:11:34.987468   39129 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1211 00:11:34.987478   39129 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1211 00:11:34.987484   39129 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1211 00:11:34.987489   39129 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1211 00:11:34.987505   39129 command_runner.go:130] > # pause_command = "/pause"
	I1211 00:11:34.987511   39129 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1211 00:11:34.987518   39129 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1211 00:11:34.987524   39129 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1211 00:11:34.987530   39129 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1211 00:11:34.987536   39129 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1211 00:11:34.987542   39129 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1211 00:11:34.987545   39129 command_runner.go:130] > # pinned_images = [
	I1211 00:11:34.987549   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987555   39129 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1211 00:11:34.987561   39129 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1211 00:11:34.987567   39129 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1211 00:11:34.987574   39129 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1211 00:11:34.987579   39129 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1211 00:11:34.987584   39129 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1211 00:11:34.987589   39129 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1211 00:11:34.987596   39129 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1211 00:11:34.987602   39129 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1211 00:11:34.987608   39129 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1211 00:11:34.987614   39129 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1211 00:11:34.987618   39129 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1211 00:11:34.987624   39129 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1211 00:11:34.987631   39129 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1211 00:11:34.987634   39129 command_runner.go:130] > # changing them here.
	I1211 00:11:34.987643   39129 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1211 00:11:34.987646   39129 command_runner.go:130] > # insecure_registries = [
	I1211 00:11:34.987651   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987657   39129 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1211 00:11:34.987662   39129 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1211 00:11:34.987666   39129 command_runner.go:130] > # image_volumes = "mkdir"
	I1211 00:11:34.987671   39129 command_runner.go:130] > # Temporary directory to use for storing big files
	I1211 00:11:34.987675   39129 command_runner.go:130] > # big_files_temporary_dir = ""
	I1211 00:11:34.987681   39129 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1211 00:11:34.987688   39129 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1211 00:11:34.987692   39129 command_runner.go:130] > # auto_reload_registries = false
	I1211 00:11:34.987698   39129 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1211 00:11:34.987706   39129 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1211 00:11:34.987711   39129 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1211 00:11:34.987715   39129 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1211 00:11:34.987719   39129 command_runner.go:130] > # The mode of short name resolution.
	I1211 00:11:34.987726   39129 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1211 00:11:34.987734   39129 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1211 00:11:34.987739   39129 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1211 00:11:34.987743   39129 command_runner.go:130] > # short_name_mode = "enforcing"
	I1211 00:11:34.987749   39129 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1211 00:11:34.987754   39129 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1211 00:11:34.987763   39129 command_runner.go:130] > # oci_artifact_mount_support = true
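
A sketch tying several of the [crio.image] options above together, pinning the pause image shown in this log so the kubelet never garbage-collects it (the drop-in file name is hypothetical):

    sudo tee /etc/crio/crio.conf.d/06-pinned-images.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    # Exclude the pause image from kubelet garbage collection,
    # per the pinned_images documentation above.
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",
    ]
    EOF
    sudo systemctl restart crio
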
	I1211 00:11:34.987770   39129 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1211 00:11:34.987773   39129 command_runner.go:130] > # CNI plugins.
	I1211 00:11:34.987776   39129 command_runner.go:130] > [crio.network]
	I1211 00:11:34.987782   39129 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1211 00:11:34.987787   39129 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1211 00:11:34.987791   39129 command_runner.go:130] > # cni_default_network = ""
	I1211 00:11:34.987797   39129 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1211 00:11:34.987801   39129 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1211 00:11:34.987806   39129 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1211 00:11:34.987809   39129 command_runner.go:130] > # plugin_dirs = [
	I1211 00:11:34.987816   39129 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1211 00:11:34.987819   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987823   39129 command_runner.go:130] > # List of included pod metrics.
	I1211 00:11:34.987827   39129 command_runner.go:130] > # included_pod_metrics = [
	I1211 00:11:34.987830   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987837   39129 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1211 00:11:34.987840   39129 command_runner.go:130] > [crio.metrics]
	I1211 00:11:34.987845   39129 command_runner.go:130] > # Globally enable or disable metrics support.
	I1211 00:11:34.987849   39129 command_runner.go:130] > # enable_metrics = false
	I1211 00:11:34.987853   39129 command_runner.go:130] > # Specify enabled metrics collectors.
	I1211 00:11:34.987859   39129 command_runner.go:130] > # Per default all metrics are enabled.
	I1211 00:11:34.987865   39129 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1211 00:11:34.987871   39129 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1211 00:11:34.987877   39129 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1211 00:11:34.987880   39129 command_runner.go:130] > # metrics_collectors = [
	I1211 00:11:34.987884   39129 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1211 00:11:34.987888   39129 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1211 00:11:34.987892   39129 command_runner.go:130] > # 	"containers_oom_total",
	I1211 00:11:34.987895   39129 command_runner.go:130] > # 	"processes_defunct",
	I1211 00:11:34.987900   39129 command_runner.go:130] > # 	"operations_total",
	I1211 00:11:34.987904   39129 command_runner.go:130] > # 	"operations_latency_seconds",
	I1211 00:11:34.987908   39129 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1211 00:11:34.987912   39129 command_runner.go:130] > # 	"operations_errors_total",
	I1211 00:11:34.987916   39129 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1211 00:11:34.987920   39129 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1211 00:11:34.987924   39129 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1211 00:11:34.987928   39129 command_runner.go:130] > # 	"image_pulls_success_total",
	I1211 00:11:34.987932   39129 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1211 00:11:34.987936   39129 command_runner.go:130] > # 	"containers_oom_count_total",
	I1211 00:11:34.987942   39129 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1211 00:11:34.987946   39129 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1211 00:11:34.987950   39129 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1211 00:11:34.987953   39129 command_runner.go:130] > # ]
	I1211 00:11:34.987962   39129 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1211 00:11:34.987967   39129 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1211 00:11:34.987972   39129 command_runner.go:130] > # The port on which the metrics server will listen.
	I1211 00:11:34.987975   39129 command_runner.go:130] > # metrics_port = 9090
	I1211 00:11:34.987980   39129 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1211 00:11:34.987984   39129 command_runner.go:130] > # metrics_socket = ""
	I1211 00:11:34.987989   39129 command_runner.go:130] > # The certificate for the secure metrics server.
	I1211 00:11:34.987994   39129 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1211 00:11:34.988001   39129 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1211 00:11:34.988005   39129 command_runner.go:130] > # certificate on any modification event.
	I1211 00:11:34.988008   39129 command_runner.go:130] > # metrics_cert = ""
	I1211 00:11:34.988013   39129 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1211 00:11:34.988018   39129 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1211 00:11:34.988021   39129 command_runner.go:130] > # metrics_key = ""
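
A sketch of enabling the metrics endpoint with the defaults listed above, then scraping it; since metrics_cert is unset, the endpoint is assumed to be plain HTTP (the drop-in file name is hypothetical):

    sudo tee /etc/crio/crio.conf.d/30-metrics.conf <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    EOF
    sudo systemctl restart crio

    # Spot-check a few of the collectors named above.
    curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|image_pulls' | head
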
	I1211 00:11:34.988026   39129 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1211 00:11:34.988030   39129 command_runner.go:130] > [crio.tracing]
	I1211 00:11:34.988035   39129 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1211 00:11:34.988038   39129 command_runner.go:130] > # enable_tracing = false
	I1211 00:11:34.988044   39129 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1211 00:11:34.988050   39129 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1211 00:11:34.988056   39129 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1211 00:11:34.988061   39129 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1211 00:11:34.988064   39129 command_runner.go:130] > # CRI-O NRI configuration.
	I1211 00:11:34.988067   39129 command_runner.go:130] > [crio.nri]
	I1211 00:11:34.988071   39129 command_runner.go:130] > # Globally enable or disable NRI.
	I1211 00:11:34.988075   39129 command_runner.go:130] > # enable_nri = true
	I1211 00:11:34.988079   39129 command_runner.go:130] > # NRI socket to listen on.
	I1211 00:11:34.988083   39129 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1211 00:11:34.988087   39129 command_runner.go:130] > # NRI plugin directory to use.
	I1211 00:11:34.988091   39129 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1211 00:11:34.988095   39129 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1211 00:11:34.988100   39129 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1211 00:11:34.988108   39129 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1211 00:11:34.988171   39129 command_runner.go:130] > # nri_disable_connections = false
	I1211 00:11:34.988177   39129 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1211 00:11:34.988182   39129 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1211 00:11:34.988186   39129 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1211 00:11:34.988190   39129 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1211 00:11:34.988194   39129 command_runner.go:130] > # NRI default validator configuration.
	I1211 00:11:34.988201   39129 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1211 00:11:34.988207   39129 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1211 00:11:34.988211   39129 command_runner.go:130] > # can be restricted/rejected:
	I1211 00:11:34.988215   39129 command_runner.go:130] > # - OCI hook injection
	I1211 00:11:34.988220   39129 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1211 00:11:34.988225   39129 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1211 00:11:34.988229   39129 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1211 00:11:34.988233   39129 command_runner.go:130] > # - adjustment of linux namespaces
	I1211 00:11:34.988240   39129 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1211 00:11:34.988246   39129 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1211 00:11:34.988251   39129 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1211 00:11:34.988254   39129 command_runner.go:130] > #
	I1211 00:11:34.988258   39129 command_runner.go:130] > # [crio.nri.default_validator]
	I1211 00:11:34.988262   39129 command_runner.go:130] > # nri_enable_default_validator = false
	I1211 00:11:34.988267   39129 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1211 00:11:34.988272   39129 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1211 00:11:34.988277   39129 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1211 00:11:34.988282   39129 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1211 00:11:34.988287   39129 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1211 00:11:34.988291   39129 command_runner.go:130] > # nri_validator_required_plugins = [
	I1211 00:11:34.988294   39129 command_runner.go:130] > # ]
	I1211 00:11:34.988299   39129 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
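
A sketch of turning on the built-in validator described above, rejecting two of the restrictable adjustments (the drop-in file name is hypothetical; the keys are the ones listed in the commented [crio.nri.default_validator] block):

    sudo tee /etc/crio/crio.conf.d/40-nri-validator.conf <<'EOF'
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    # Reject NRI plugins that inject OCI hooks or adjust Linux namespaces.
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_reject_namespace_adjustment = true
    EOF
    sudo systemctl restart crio
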
	I1211 00:11:34.988306   39129 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1211 00:11:34.988309   39129 command_runner.go:130] > [crio.stats]
	I1211 00:11:34.988316   39129 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1211 00:11:34.988321   39129 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1211 00:11:34.988324   39129 command_runner.go:130] > # stats_collection_period = 0
	I1211 00:11:34.988334   39129 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1211 00:11:34.988341   39129 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1211 00:11:34.988345   39129 command_runner.go:130] > # collection_period = 0
	I1211 00:11:34.988741   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943588402Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1211 00:11:34.988759   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.943910852Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1211 00:11:34.988775   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944105801Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1211 00:11:34.988788   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944281599Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1211 00:11:34.988804   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944534263Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:11:34.988813   39129 command_runner.go:130] ! time="2025-12-11T00:11:34.944919976Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1211 00:11:34.988827   39129 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1211 00:11:34.988906   39129 cni.go:84] Creating CNI manager for ""
	I1211 00:11:34.988923   39129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:11:34.988942   39129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:11:34.988966   39129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:11:34.989098   39129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
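The config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch of sanity-checking it by hand with the kubeadm binary this log locates, without mutating the node (--dry-run still runs preflight checks):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new \
      --dry-run
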
	I1211 00:11:34.989171   39129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:11:34.996103   39129 command_runner.go:130] > kubeadm
	I1211 00:11:34.996124   39129 command_runner.go:130] > kubectl
	I1211 00:11:34.996130   39129 command_runner.go:130] > kubelet
	I1211 00:11:34.996965   39129 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:11:34.997027   39129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:11:35.004524   39129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:11:35.022259   39129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:11:35.035877   39129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 00:11:35.049665   39129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:11:35.053270   39129 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1211 00:11:35.053410   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:35.173051   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:35.663593   39129 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:11:35.663611   39129 certs.go:195] generating shared ca certs ...
	I1211 00:11:35.663626   39129 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:35.663843   39129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:11:35.663918   39129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:11:35.664081   39129 certs.go:257] generating profile certs ...
	I1211 00:11:35.664282   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:11:35.664361   39129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:11:35.664489   39129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:11:35.664502   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 00:11:35.664555   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 00:11:35.664574   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 00:11:35.664591   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 00:11:35.664636   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 00:11:35.664653   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 00:11:35.664664   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 00:11:35.664675   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 00:11:35.664773   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:11:35.664811   39129 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:11:35.664825   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:11:35.664885   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:11:35.664944   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:11:35.664975   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:11:35.665087   39129 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:11:35.665126   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 00:11:35.665138   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.665177   39129 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.666144   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:11:35.692413   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:11:35.716263   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:11:35.735120   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:11:35.753386   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:11:35.771269   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:11:35.789331   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:11:35.806153   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:11:35.823663   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:11:35.840043   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:11:35.857281   39129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:11:35.874656   39129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:11:35.887595   39129 ssh_runner.go:195] Run: openssl version
	I1211 00:11:35.893373   39129 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1211 00:11:35.893766   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.901331   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:11:35.908770   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912293   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912332   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.912381   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:11:35.953295   39129 command_runner.go:130] > 3ec20f2e
	I1211 00:11:35.953382   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:11:35.960497   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.967487   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:11:35.974778   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978822   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978856   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:35.978928   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:11:36.019575   39129 command_runner.go:130] > b5213941
	I1211 00:11:36.020060   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:11:36.028538   39129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.036748   39129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:11:36.045277   39129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049492   39129 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049553   39129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.049672   39129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:11:36.092814   39129 command_runner.go:130] > 51391683
	I1211 00:11:36.093356   39129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
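
The three test/ln/hash sequences above implement OpenSSL's hashed-directory CA lookup: install the PEM, compute its subject hash, and symlink <hash>.0 into /etc/ssl/certs. A condensed sketch using the minikubeCA.pem path and the b5213941 hash from this log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # OpenSSL looks up CAs by <hash>.0
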
	I1211 00:11:36.101223   39129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105165   39129 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:11:36.105191   39129 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1211 00:11:36.105198   39129 command_runner.go:130] > Device: 259,1	Inode: 1312480     Links: 1
	I1211 00:11:36.105205   39129 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1211 00:11:36.105212   39129 command_runner.go:130] > Access: 2025-12-11 00:07:28.485872476 +0000
	I1211 00:11:36.105217   39129 command_runner.go:130] > Modify: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105222   39129 command_runner.go:130] > Change: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105228   39129 command_runner.go:130] >  Birth: 2025-12-11 00:03:24.590537280 +0000
	I1211 00:11:36.105288   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:11:36.146158   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.146663   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:11:36.187479   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.187576   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:11:36.228130   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.228568   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:11:36.269072   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.269532   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:11:36.310317   39129 command_runner.go:130] > Certificate will not expire
	I1211 00:11:36.310832   39129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1211 00:11:36.353606   39129 command_runner.go:130] > Certificate will not expire
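
`openssl x509 -checkend 86400` exits non-zero only if the certificate expires within the next 86400 seconds, which is how minikube decides whether a control-plane cert needs regeneration. The same check in pure Go, as a sketch (the path is one of the certs checked above):

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d,
    // the Go equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    // expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
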
	I1211 00:11:36.354067   39129 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:11:36.354163   39129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:11:36.354246   39129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:11:36.382480   39129 cri.go:89] found id: ""
	I1211 00:11:36.382557   39129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:11:36.389756   39129 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1211 00:11:36.389777   39129 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1211 00:11:36.389784   39129 command_runner.go:130] > /var/lib/minikube/etcd:
	I1211 00:11:36.390708   39129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:11:36.390737   39129 kubeadm.go:598] restartPrimaryControlPlane start ...
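
kubeadm.go chooses between a fresh init and a control-plane restart by probing for leftover state on the node; since /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd were all found, it takes the restart path. A hedged sketch of that decision (the real check runs `sudo ls` over the ssh runner; os.Stat stands in for it here):

    import "os"

    // wantRestart is true only if every piece of prior cluster state exists.
    func wantRestart() bool {
        for _, p := range []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false // anything missing => full kubeadm init
            }
        }
        return true
    }
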
	I1211 00:11:36.390806   39129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:11:36.398342   39129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:11:36.398732   39129 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-786978" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.398833   39129 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "functional-786978" cluster setting kubeconfig missing "functional-786978" context setting]
	I1211 00:11:36.399137   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
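
The kubeconfig repair above adds the missing cluster and context entries under a write lock. With client-go's clientcmd package the repair itself is only a few calls; a minimal sketch using the names and endpoint from the log (locking omitted):

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func repairKubeconfig(path string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters["functional-786978"] = &clientcmdapi.Cluster{
            Server:               "https://192.168.49.2:8441",
            CertificateAuthority: "/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt",
        }
        cfg.Contexts["functional-786978"] = &clientcmdapi.Context{
            Cluster:  "functional-786978",
            AuthInfo: "functional-786978",
        }
        return clientcmd.WriteToFile(*cfg, path)
    }
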
	I1211 00:11:36.399560   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.399714   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.400253   39129 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 00:11:36.400273   39129 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 00:11:36.400281   39129 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 00:11:36.400286   39129 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 00:11:36.400291   39129 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
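
The envvar.go lines list client-go feature-gate defaults. Assuming the standard client-go environment-variable gate mechanism (an assumption; this log does not demonstrate it), a gate can be flipped per process without a rebuild, e.g.:

    # hypothetical: override a client-go feature gate via environment
    KUBE_FEATURE_WatchListClient=true out/minikube-linux-arm64 start ...
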
	I1211 00:11:36.400594   39129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:11:36.400697   39129 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1211 00:11:36.409983   39129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1211 00:11:36.410015   39129 kubeadm.go:602] duration metric: took 19.271635ms to restartPrimaryControlPlane
	I1211 00:11:36.410025   39129 kubeadm.go:403] duration metric: took 55.966406ms to StartCluster
	I1211 00:11:36.410041   39129 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410105   39129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.410754   39129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:11:36.410951   39129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 00:11:36.411375   39129 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:11:36.411428   39129 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 00:11:36.411496   39129 addons.go:70] Setting storage-provisioner=true in profile "functional-786978"
	I1211 00:11:36.411509   39129 addons.go:239] Setting addon storage-provisioner=true in "functional-786978"
	I1211 00:11:36.411539   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.412103   39129 addons.go:70] Setting default-storageclass=true in profile "functional-786978"
	I1211 00:11:36.412128   39129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-786978"
	I1211 00:11:36.412372   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.412555   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.416027   39129 out.go:179] * Verifying Kubernetes components...
	I1211 00:11:36.418962   39129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:11:36.445616   39129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 00:11:36.448584   39129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.448615   39129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 00:11:36.448687   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.455632   39129 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:11:36.455806   39129 kapi.go:59] client config for functional-786978: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 00:11:36.456398   39129 addons.go:239] Setting addon default-storageclass=true in "functional-786978"
	I1211 00:11:36.456432   39129 host.go:66] Checking if "functional-786978" exists ...
	I1211 00:11:36.459345   39129 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:11:36.488078   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:11:36.511255   39129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:36.511282   39129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 00:11:36.511350   39129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:11:36.540894   39129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
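
sshutil reaches the node through the docker-published port 32783 on 127.0.0.1, authenticating with the profile's id_rsa key. A minimal sketch with golang.org/x/crypto/ssh (host-key verification is skipped only because this is a throwaway local test container):

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    func dialNode() (*ssh.Client, error) {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa")
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container only
        }
        return ssh.Dial("tcp", "127.0.0.1:32783", cfg)
    }
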
	I1211 00:11:36.608214   39129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:11:36.665748   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:36.679982   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.404051   39129 node_ready.go:35] waiting up to 6m0s for node "functional-786978" to be "Ready" ...
	I1211 00:11:37.404239   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.404323   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.404634   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404742   39129 retry.go:31] will retry after 310.125043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
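
Every apply in this stretch fails the same way: kubectl's client-side validation first fetches the OpenAPI schema from the apiserver, and nothing is listening on :8441 yet, so even a well-formed manifest is rejected before it is sent. As the error text itself suggests, the validation round-trip can be skipped:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml

Note that --validate=false only removes the schema fetch; the apply itself still needs a reachable apiserver, so here it would fail regardless and retrying is the right call.
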
	I1211 00:11:37.404824   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.404858   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404893   39129 retry.go:31] will retry after 141.721995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.404991   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:37.547464   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:37.613487   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.613562   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.613592   39129 retry.go:31] will retry after 561.758211ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.715754   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:37.779510   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:37.779557   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:37.779585   39129 retry.go:31] will retry after 505.869102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
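
retry.go re-runs each failed apply after a growing, jittered delay (310ms, 141ms, 561ms, ... climbing to several seconds below). A generic sketch of that pattern, with illustrative constants rather than minikube's actual ones:

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff runs op up to attempts times, roughly doubling the
    // delay each round and adding jitter so retries don't synchronize.
    func retryWithBackoff(op func() error, attempts int) error {
        delay := 200 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        return err
    }
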
	I1211 00:11:37.904810   39129 type.go:168] "Request Body" body=""
	I1211 00:11:37.904884   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:37.905267   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.175539   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.243137   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.243185   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.243204   39129 retry.go:31] will retry after 361.539254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.286533   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:38.344606   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.348111   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.348157   39129 retry.go:31] will retry after 829.218438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.404431   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.404511   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.404881   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:38.605429   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:38.661283   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:38.664833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.664864   39129 retry.go:31] will retry after 800.266997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:38.905185   39129 type.go:168] "Request Body" body=""
	I1211 00:11:38.905301   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:38.905646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:39.178251   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:39.238429   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.238472   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.238493   39129 retry.go:31] will retry after 1.184749907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.404921   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.405001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.405348   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:39.405424   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
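
node_ready.go polls GET /api/v1/nodes/functional-786978 about twice a second and tolerates errors like the connection-refused above while the apiserver comes back. The readiness test itself, expressed with client-go (clientset construction omitted):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. "connect: connection refused" during restart
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
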
	I1211 00:11:39.465581   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:39.526474   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:39.526525   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.526544   39129 retry.go:31] will retry after 1.807004704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:39.905028   39129 type.go:168] "Request Body" body=""
	I1211 00:11:39.905105   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:39.905423   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.405292   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.405603   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:40.423936   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:40.495739   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:40.495794   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.495811   39129 retry.go:31] will retry after 1.404783651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:40.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:11:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:40.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.334388   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:41.396786   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.396852   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.396891   39129 retry.go:31] will retry after 1.10995967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.405068   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.405184   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.405534   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:41.405602   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:41.901437   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:41.905007   39129 type.go:168] "Request Body" body=""
	I1211 00:11:41.905077   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:41.905313   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:41.984043   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:41.984104   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:41.984123   39129 retry.go:31] will retry after 1.551735429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.404784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:42.507069   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:42.562010   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:42.565655   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.565695   39129 retry.go:31] will retry after 1.834850552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:42.904273   39129 type.go:168] "Request Body" body=""
	I1211 00:11:42.904413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:42.904767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.404422   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:43.536095   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:43.596578   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:43.596618   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.596641   39129 retry.go:31] will retry after 3.759083682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:43.905026   39129 type.go:168] "Request Body" body=""
	I1211 00:11:43.905109   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:43.905424   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:43.905474   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
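
All the failures in this window reduce to one symptom: nothing is accepting TCP connections on 192.168.49.2:8441 while the control plane restarts. A quick probe for that, independent of kubectl (a sketch):

    import (
        "net"
        "time"
    )

    // apiserverUp reports whether something is listening at addr.
    func apiserverUp(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false // "connection refused" lands here
        }
        conn.Close()
        return true
    }

    // apiserverUp("192.168.49.2:8441")
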
	I1211 00:11:44.401015   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:44.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.404608   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:44.466004   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:44.470131   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.470162   39129 retry.go:31] will retry after 3.734519465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:44.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:11:44.904450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:44.904746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.404448   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.404610   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.405391   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:45.905314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:45.905389   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:45.905730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:45.905817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:46.404489   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.404597   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.404850   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:46.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:11:46.904888   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:46.905184   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.356864   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:47.404412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.404755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:47.420245   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:47.420295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.420315   39129 retry.go:31] will retry after 2.851566945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:47.904846   39129 type.go:168] "Request Body" body=""
	I1211 00:11:47.904912   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:47.905167   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:48.205865   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:48.269575   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:48.269614   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.269633   39129 retry.go:31] will retry after 3.250947796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:48.404858   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.404932   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.405259   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:48.405314   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:48.905121   39129 type.go:168] "Request Body" body=""
	I1211 00:11:48.905209   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:48.905582   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.404258   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.404342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:49.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:11:49.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:49.904741   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.272194   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:50.327238   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:50.331229   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.331261   39129 retry.go:31] will retry after 4.377849152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:50.404603   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.404681   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.404972   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:50.904412   39129 type.go:168] "Request Body" body=""
	I1211 00:11:50.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:50.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:50.904763   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:51.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.404469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:51.521211   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:11:51.575865   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:51.579753   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.579788   39129 retry.go:31] will retry after 10.380601314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:51.905193   39129 type.go:168] "Request Body" body=""
	I1211 00:11:51.905263   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:51.905566   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.405257   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.405339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.405613   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:52.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:11:52.904393   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:52.904681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:53.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.404795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:53.404852   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:53.904359   39129 type.go:168] "Request Body" body=""
	I1211 00:11:53.904440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:53.904804   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.404471   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.404754   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:54.709241   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:11:54.767641   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:11:54.771055   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.771086   39129 retry.go:31] will retry after 5.957769887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:11:54.904303   39129 type.go:168] "Request Body" body=""
	I1211 00:11:54.904405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:54.904730   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.404312   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.404383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.404693   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:55.904394   39129 type.go:168] "Request Body" body=""
	I1211 00:11:55.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:55.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:55.904849   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:56.404616   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.404692   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.405015   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:56.904919   39129 type.go:168] "Request Body" body=""
	I1211 00:11:56.904989   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:56.905263   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.405054   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.405131   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:57.905342   39129 type.go:168] "Request Body" body=""
	I1211 00:11:57.905419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:57.905761   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:11:57.905821   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:11:58.404329   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.404407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.404667   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:58.904372   39129 type.go:168] "Request Body" body=""
	I1211 00:11:58.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:58.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.404336   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.404718   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:11:59.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:11:59.904404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:11:59.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:00.404425   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.404531   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.404943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:00.405022   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:00.729113   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:00.791242   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:00.794799   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.794830   39129 retry.go:31] will retry after 11.484844112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:00.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:00.905270   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:00.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.405214   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.405547   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.904696   39129 type.go:168] "Request Body" body=""
	I1211 00:12:01.904770   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:01.905114   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:01.961328   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:02.020749   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:02.024939   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.024971   39129 retry.go:31] will retry after 14.651232328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:02.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:02.904345   39129 type.go:168] "Request Body" body=""
	I1211 00:12:02.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:02.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:02.904792   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:03.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.404777   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:03.904466   39129 type.go:168] "Request Body" body=""
	I1211 00:12:03.904548   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:03.904897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.404457   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.404546   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.404879   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:04.904381   39129 type.go:168] "Request Body" body=""
	I1211 00:12:04.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:04.904772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:04.904829   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:05.404564   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.404650   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.405040   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:05.904332   39129 type.go:168] "Request Body" body=""
	I1211 00:12:05.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:05.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.404608   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.404684   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.405046   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:06.905071   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:06.905390   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:06.905442   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:07.405193   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.405265   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.405584   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:07.904280   39129 type.go:168] "Request Body" body=""
	I1211 00:12:07.904352   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:07.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.404398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:08.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:12:08.904498   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:08.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:09.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.404791   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:09.404848   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:09.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:09.904475   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:09.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.404523   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:10.904376   39129 type.go:168] "Request Body" body=""
	I1211 00:12:10.904456   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:10.904816   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.404396   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.404467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:11.904428   39129 type.go:168] "Request Body" body=""
	I1211 00:12:11.904505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:11.904831   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:11.904892   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:12.280537   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:12.342793   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:12.342833   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.342853   39129 retry.go:31] will retry after 23.205348466s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:12.405205   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.405280   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.405602   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:12.904320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:12.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:12.904717   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.404271   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.404662   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:13.905297   39129 type.go:168] "Request Body" body=""
	I1211 00:12:13.905373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:13.905750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:13.905805   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:14.404327   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.404703   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:14.904352   39129 type.go:168] "Request Body" body=""
	I1211 00:12:14.904426   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:14.904734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.404386   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.404459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.404774   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:15.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:15.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:15.904784   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:16.404614   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.404686   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.405057   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:16.405114   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:16.676815   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:16.732715   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:16.736183   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.736213   39129 retry.go:31] will retry after 30.816141509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:16.904382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:16.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:16.904790   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.404450   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.404776   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:17.904286   39129 type.go:168] "Request Body" body=""
	I1211 00:12:17.904361   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:17.904615   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.404395   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:18.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:12:18.904448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:18.904755   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:18.904810   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:19.404445   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.404533   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.404857   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:19.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:12:19.904394   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:19.904694   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:20.904311   39129 type.go:168] "Request Body" body=""
	I1211 00:12:20.904384   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:20.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:21.404382   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.404473   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:21.404887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:21.904789   39129 type.go:168] "Request Body" body=""
	I1211 00:12:21.904874   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:21.905204   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.404948   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.405018   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.405273   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:22.905073   39129 type.go:168] "Request Body" body=""
	I1211 00:12:22.905146   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:22.905464   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:23.405279   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.405347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.405687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:23.405741   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:23.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:23.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:23.904659   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.404419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.404824   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:24.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:12:24.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:24.904796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.404296   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.404369   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:25.904391   39129 type.go:168] "Request Body" body=""
	I1211 00:12:25.904463   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:25.904801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:25.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:26.404631   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.404718   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.405047   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:26.904918   39129 type.go:168] "Request Body" body=""
	I1211 00:12:26.904987   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:26.905309   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.405154   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.405228   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.405588   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:27.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:12:27.904400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:27.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:28.404315   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.404385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.404689   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:28.404748   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:28.904331   39129 type.go:168] "Request Body" body=""
	I1211 00:12:28.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:28.904750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.404492   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.404573   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.404959   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:29.904646   39129 type.go:168] "Request Body" body=""
	I1211 00:12:29.904725   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:29.905092   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:30.404773   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.404846   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.405165   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:30.405221   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:30.904956   39129 type.go:168] "Request Body" body=""
	I1211 00:12:30.905034   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:30.905377   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.405001   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.405072   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.405325   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:31.905249   39129 type.go:168] "Request Body" body=""
	I1211 00:12:31.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:31.905650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.404342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:32.904301   39129 type.go:168] "Request Body" body=""
	I1211 00:12:32.904387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:32.904648   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:32.904697   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:33.404388   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.404470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.404825   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:33.904520   39129 type.go:168] "Request Body" body=""
	I1211 00:12:33.904591   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:33.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.404370   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.404711   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:34.904339   39129 type.go:168] "Request Body" body=""
	I1211 00:12:34.904412   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:34.904742   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:34.904798   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:35.404390   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.404464   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:35.549321   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:35.607106   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:35.610743   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:35.610780   39129 retry.go:31] will retry after 16.241459848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
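The "will retry after 16.241459848s" line above reflects a jittered, growing backoff between apply attempts. A minimal Go sketch of that pattern follows; retryAfter, the 5s base delay, and the doubling factor are illustrative assumptions, not minikube's actual pkg/util/retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter re-runs fn with a randomized, growing delay until it succeeds
// or the total wait budget is exhausted.
func retryAfter(fn func() error, maxWait time.Duration) error {
	var err error
	delay := 5 * time.Second
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent retriers don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("timed out after %s: %w", maxWait, err)
}

func main() {
	attempt := 0
	err := retryAfter(func() error {
		attempt++
		if attempt < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}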
	I1211 00:12:35.905109   39129 type.go:168] "Request Body" body=""
	I1211 00:12:35.905200   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:35.905468   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:36.404440   39129 type.go:168] "Request Body" body=""
	I1211 00:12:36.404514   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:36.404828   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:36.904809   39129 type.go:168] "Request Body" body=""
	I1211 00:12:36.904881   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:36.905210   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:36.905281   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:37.404334   39129 type.go:168] "Request Body" body=""
	I1211 00:12:37.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:37.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:37.904377   39129 type.go:168] "Request Body" body=""
	I1211 00:12:37.904509   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:37.904813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:38.404408   39129 type.go:168] "Request Body" body=""
	I1211 00:12:38.404481   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:38.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:38.904342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:38.904429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:38.904706   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:39.404416   39129 type.go:168] "Request Body" body=""
	I1211 00:12:39.404510   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:39.404857   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:39.404920   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:39.904654   39129 type.go:168] "Request Body" body=""
	I1211 00:12:39.904746   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:39.905070   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:40.404756   39129 type.go:168] "Request Body" body=""
	I1211 00:12:40.404825   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:40.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:40.904945   39129 type.go:168] "Request Body" body=""
	I1211 00:12:40.905026   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:40.905372   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:41.405159   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.405236   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.405596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:41.405654   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:41.904342   39129 type.go:168] "Request Body" body=""
	I1211 00:12:41.904410   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:41.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.404375   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.404773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:42.904495   39129 type.go:168] "Request Body" body=""
	I1211 00:12:42.904570   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:42.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.404321   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.404638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:43.904337   39129 type.go:168] "Request Body" body=""
	I1211 00:12:43.904408   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:43.904731   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:43.904791   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:44.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.404425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.404756   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:44.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:12:44.904385   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:44.904643   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.405043   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.405120   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.405458   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:45.905241   39129 type.go:168] "Request Body" body=""
	I1211 00:12:45.905313   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:45.905665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:45.905721   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:46.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.404365   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.404665   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:46.904369   39129 type.go:168] "Request Body" body=""
	I1211 00:12:46.904443   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:46.904803   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.404531   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.404614   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.404913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:47.553376   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:12:47.607763   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:47.611288   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.611317   39129 retry.go:31] will retry after 35.21019071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1211 00:12:47.904878   39129 type.go:168] "Request Body" body=""
	I1211 00:12:47.904951   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:47.905249   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:48.405085   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.405161   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.405471   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:48.405525   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:48.905284   39129 type.go:168] "Request Body" body=""
	I1211 00:12:48.905364   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:48.905681   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.405295   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.405377   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.405636   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:49.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:49.904436   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:49.904752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.404362   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.404447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:50.904329   39129 type.go:168] "Request Body" body=""
	I1211 00:12:50.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:50.904691   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:50.904742   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:51.404407   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.404485   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.404838   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.852477   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 00:12:51.904839   39129 type.go:168] "Request Body" body=""
	I1211 00:12:51.904910   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:51.905174   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:51.907207   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910687   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:12:51.910785   39129 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
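The apply fails before any manifest reaches the cluster: client-side validation first fetches /openapi/v2 from the apiserver, and port 8441 is refusing connections, so kubectl exits 1 and suggests --validate=false (which skips that round-trip at the cost of schema checks). A sketch of the logged invocation is below; applyManifest is a hypothetical helper name, but the binary path, flags, and KUBECONFIG handling mirror the command in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest shells out to kubectl the way the log shows minikube doing it.
func applyManifest(kubeconfig, kubectl, manifest string, validate bool) error {
	args := []string{"apply", "--force", "-f", manifest}
	if !validate {
		// Skip the OpenAPI download while the apiserver is unreachable.
		args = append(args, "--validate=false")
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyManifest("/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml", true)
	fmt.Println(err)
}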
	I1211 00:12:52.404276   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.404356   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.404612   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:52.904314   39129 type.go:168] "Request Body" body=""
	I1211 00:12:52.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:52.904765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:52.904830   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:53.404522   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.404596   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.404945   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:53.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:12:53.904430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:53.904700   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.404439   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.404750   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:54.904380   39129 type.go:168] "Request Body" body=""
	I1211 00:12:54.904458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:54.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:55.404272   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.404347   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.404671   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:55.404733   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:55.904357   39129 type.go:168] "Request Body" body=""
	I1211 00:12:55.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:55.904788   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.404550   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.404631   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.404976   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:56.904790   39129 type.go:168] "Request Body" body=""
	I1211 00:12:56.904860   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:56.905139   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:57.404944   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.405013   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.405350   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:57.405406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:12:57.905194   39129 type.go:168] "Request Body" body=""
	I1211 00:12:57.905273   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:57.905640   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.405189   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.405260   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.405511   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:58.905275   39129 type.go:168] "Request Body" body=""
	I1211 00:12:58.905353   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:58.905724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.404325   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.404712   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:12:59.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:12:59.904425   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:12:59.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:12:59.904732   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:00.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.404486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.405098   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:00.904965   39129 type.go:168] "Request Body" body=""
	I1211 00:13:00.905043   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:00.905388   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.405097   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.405176   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.405439   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:01.904725   39129 type.go:168] "Request Body" body=""
	I1211 00:13:01.904806   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:01.905152   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:01.905207   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:02.404978   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.405084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.405396   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:02.905191   39129 type.go:168] "Request Body" body=""
	I1211 00:13:02.905264   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:02.905532   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.405309   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.405405   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.405763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:03.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:13:03.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:03.904795   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:04.404467   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.404555   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:04.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:04.904358   39129 type.go:168] "Request Body" body=""
	I1211 00:13:04.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:04.904758   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.404366   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.404438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.404775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:05.904484   39129 type.go:168] "Request Body" body=""
	I1211 00:13:05.904554   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:05.904870   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:06.404545   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.404613   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.404937   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:06.404991   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:06.904732   39129 type.go:168] "Request Body" body=""
	I1211 00:13:06.904814   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:06.905130   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.404806   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.404877   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.405129   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:07.904906   39129 type.go:168] "Request Body" body=""
	I1211 00:13:07.904976   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:07.905335   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:08.405133   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.405212   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.405523   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:08.405575   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:08.905290   39129 type.go:168] "Request Body" body=""
	I1211 00:13:08.905357   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:08.905610   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.404423   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.404766   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:09.904501   39129 type.go:168] "Request Body" body=""
	I1211 00:13:09.904588   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:09.904943   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.404293   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.404362   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.404651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:10.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:13:10.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:10.904787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:10.904861   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:11.404508   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.404642   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.404988   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:11.904763   39129 type.go:168] "Request Body" body=""
	I1211 00:13:11.904841   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:11.905096   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.404345   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:12.904307   39129 type.go:168] "Request Body" body=""
	I1211 00:13:12.904388   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:12.904759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:13.404447   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.404526   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:13.404835   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:13.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:13:13.904421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:13.904745   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.404439   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.404842   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:14.904313   39129 type.go:168] "Request Body" body=""
	I1211 00:13:14.904381   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:14.904637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.404367   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.404787   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:15.904488   39129 type.go:168] "Request Body" body=""
	I1211 00:13:15.904581   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:15.904884   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:15.904954   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:16.404512   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.404576   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.404846   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:16.904793   39129 type.go:168] "Request Body" body=""
	I1211 00:13:16.904870   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:16.905202   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.404863   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.404963   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.405289   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:17.905011   39129 type.go:168] "Request Body" body=""
	I1211 00:13:17.905075   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:17.905318   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:17.905356   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:18.405098   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.405169   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.405467   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:18.905238   39129 type.go:168] "Request Body" body=""
	I1211 00:13:18.905310   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:18.905637   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.404323   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.404716   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:19.904449   39129 type.go:168] "Request Body" body=""
	I1211 00:13:19.904524   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:19.904900   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:20.404601   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.405009   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:20.405059   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:13:20.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:13:20.904383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:20.904630   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.404435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:21.904577   39129 type.go:168] "Request Body" body=""
	I1211 00:13:21.904658   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:21.905033   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.404711   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.404786   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.405042   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:13:22.821681   39129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1211 00:13:22.876683   39129 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880295   39129 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1211 00:13:22.880396   39129 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1211 00:13:22.883693   39129 out.go:179] * Enabled addons: 
	I1211 00:13:22.887530   39129 addons.go:530] duration metric: took 1m46.476102717s for enable addons: enabled=[]
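
The polling pattern visible above and below (a GET on the node object every ~500ms, with a warn-and-retry on connection refused) can be reproduced with a minimal client-go loop. The sketch below is illustrative only and is not minikube's node_ready.go; the kubeconfig path, node name, poll interval, and 4-minute budget are assumptions read off this log.

// nodeready_sketch.go: a minimal sketch of the poll/warn/retry shape seen in
// this log. Illustrative only; not minikube's actual node_ready.go code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path taken from the apply command in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 500ms (the cadence visible in the timestamps above) until the
	// node reports Ready or an assumed 4-minute budget runs out.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-786978", metav1.GetOptions{})
			if err != nil {
				// A refused connection lands here; returning (false, nil)
				// logs and retries instead of aborting the wait, matching
				// the repeated node_ready.go:55 warnings in this log.
				log.Printf("error getting node condition \"Ready\" status (will retry): %v", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}
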
	I1211 00:13:22.904608   39129 type.go:168] "Request Body" body=""
	I1211 00:13:22.904678   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:13:22.904957   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:13:22.905000   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 poll repeats every ~500ms from 00:13:23.404 through 00:14:19.904 with every attempt failing; the node_ready.go:55 warning (error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused) recurs roughly every 2s throughout ...]
	I1211 00:14:20.404377   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.404818   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:20.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:20.904497   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:20.904822   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.404460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.404763   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:21.904387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:21.904470   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:21.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:22.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.404422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.404708   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:22.404753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:22.904479   39129 type.go:168] "Request Body" body=""
	I1211 00:14:22.904556   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:22.904841   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.404574   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.404645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.404921   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:23.904305   39129 type.go:168] "Request Body" body=""
	I1211 00:14:23.904373   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:23.904664   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.404251   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.404345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.404672   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:24.904409   39129 type.go:168] "Request Body" body=""
	I1211 00:14:24.904486   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:24.904832   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:24.904887   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:25.404378   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.404461   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.404736   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:25.904509   39129 type.go:168] "Request Body" body=""
	I1211 00:14:25.904583   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:25.904913   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.404731   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.404818   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.405155   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:26.904985   39129 type.go:168] "Request Body" body=""
	I1211 00:14:26.905061   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:26.905327   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:26.905366   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:27.405132   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.405207   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:27.905312   39129 type.go:168] "Request Body" body=""
	I1211 00:14:27.905383   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:27.905699   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.404639   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:28.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:14:28.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:28.904740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:29.404330   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.404404   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:29.404817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:29.904445   39129 type.go:168] "Request Body" body=""
	I1211 00:14:29.904517   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:29.904836   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.404365   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.404440   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.404772   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:30.904355   39129 type.go:168] "Request Body" body=""
	I1211 00:14:30.904433   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:30.904773   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:31.404452   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.404538   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.404813   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:31.404867   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:31.904825   39129 type.go:168] "Request Body" body=""
	I1211 00:14:31.904902   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:31.905256   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.405061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.405133   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.405434   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:32.905146   39129 type.go:168] "Request Body" body=""
	I1211 00:14:32.905216   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:32.905460   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:33.405223   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.405303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.405614   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:33.405669   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:33.904300   39129 type.go:168] "Request Body" body=""
	I1211 00:14:33.904380   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:33.904714   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.404393   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.404468   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.404719   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:34.904353   39129 type.go:168] "Request Body" body=""
	I1211 00:14:34.904427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:34.904724   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.404347   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.404418   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:35.904262   39129 type.go:168] "Request Body" body=""
	I1211 00:14:35.904332   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:35.904642   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:35.904703   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:36.404548   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.404619   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.404942   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:36.904920   39129 type.go:168] "Request Body" body=""
	I1211 00:14:36.905001   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:36.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.405180   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.405250   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.405549   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:37.904323   39129 type.go:168] "Request Body" body=""
	I1211 00:14:37.904398   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:37.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:37.904735   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:38.404400   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.404798   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:38.904471   39129 type.go:168] "Request Body" body=""
	I1211 00:14:38.904540   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:38.904868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.404349   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.404421   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.404739   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:39.904326   39129 type.go:168] "Request Body" body=""
	I1211 00:14:39.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:39.904687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:40.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.404400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.404655   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:40.404705   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:40.904360   39129 type.go:168] "Request Body" body=""
	I1211 00:14:40.904435   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:40.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.404350   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.404427   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.404749   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:41.904650   39129 type.go:168] "Request Body" body=""
	I1211 00:14:41.904717   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:41.904964   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:42.404693   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.404775   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.405115   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:42.405176   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:42.904955   39129 type.go:168] "Request Body" body=""
	I1211 00:14:42.905044   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:42.905384   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.405173   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.405244   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.405506   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:43.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:43.904350   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:43.904709   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.404387   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:44.904497   39129 type.go:168] "Request Body" body=""
	I1211 00:14:44.904566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:44.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:44.904856   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:45.404389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.404848   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:45.904373   39129 type.go:168] "Request Body" body=""
	I1211 00:14:45.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:45.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.404535   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.404605   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.404878   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:46.904931   39129 type.go:168] "Request Body" body=""
	I1211 00:14:46.905004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:46.905351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:46.905406   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:47.405157   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.405257   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.405597   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:47.904278   39129 type.go:168] "Request Body" body=""
	I1211 00:14:47.904346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:47.904600   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.404314   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.404401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:48.904431   39129 type.go:168] "Request Body" body=""
	I1211 00:14:48.904530   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:48.904960   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:49.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.404430   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.404840   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:49.404917   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:49.904389   39129 type.go:168] "Request Body" body=""
	I1211 00:14:49.904479   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:49.904823   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.404429   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.404707   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:50.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:14:50.904445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:50.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.404351   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.404785   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:51.904397   39129 type.go:168] "Request Body" body=""
	I1211 00:14:51.904467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:51.904812   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:51.904876   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:52.404539   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.404611   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.404868   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:52.904407   39129 type.go:168] "Request Body" body=""
	I1211 00:14:52.904488   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:52.904829   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.404507   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.404587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.404909   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:53.904363   39129 type.go:168] "Request Body" body=""
	I1211 00:14:53.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:53.904751   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:54.404346   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.404735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:54.404785   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:54.905091   39129 type.go:168] "Request Body" body=""
	I1211 00:14:54.905164   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:54.905461   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.405218   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.405287   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.405536   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:55.905310   39129 type.go:168] "Request Body" body=""
	I1211 00:14:55.905400   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:55.905792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:56.404664   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.404738   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.405079   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:56.405134   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:56.904863   39129 type.go:168] "Request Body" body=""
	I1211 00:14:56.904929   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:56.905177   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.404950   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.405032   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.405383   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:57.905061   39129 type.go:168] "Request Body" body=""
	I1211 00:14:57.905135   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:57.905490   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:58.405233   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.405306   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.405559   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:14:58.405605   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:14:58.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:14:58.904345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:58.904683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.404404   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.404487   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.404786   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:14:59.904269   39129 type.go:168] "Request Body" body=""
	I1211 00:14:59.904338   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:14:59.904596   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.404353   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.404442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.404757   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:00.904439   39129 type.go:168] "Request Body" body=""
	I1211 00:15:00.904522   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:00.904908   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:00.904971   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:01.404441   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.404521   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.404833   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:01.904840   39129 type.go:168] "Request Body" body=""
	I1211 00:15:01.904916   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:01.905261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.405074   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.405158   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.405505   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:02.905255   39129 type.go:168] "Request Body" body=""
	I1211 00:15:02.905324   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:02.905626   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:02.905685   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:03.404401   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.404837   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:03.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:15:03.904501   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:03.904794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.404320   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.404396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.404697   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:04.904287   39129 type.go:168] "Request Body" body=""
	I1211 00:15:04.904363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:04.904668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:05.404328   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.404752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:05.404809   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:05.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:15:05.904390   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:05.904646   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.404538   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.404621   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.404968   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:06.905001   39129 type.go:168] "Request Body" body=""
	I1211 00:15:06.905084   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:06.905399   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:07.405134   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.405202   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.405455   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:07.405496   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:07.905236   39129 type.go:168] "Request Body" body=""
	I1211 00:15:07.905316   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:07.905668   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.404259   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.404335   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.404669   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:08.904348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:08.904415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:08.904675   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.404431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.404767   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:09.904456   39129 type.go:168] "Request Body" body=""
	I1211 00:15:09.904528   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:09.904872   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:09.904926   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:15:10.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.404420   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.404687   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:10.904367   39129 type.go:168] "Request Body" body=""
	I1211 00:15:10.904438   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:10.904817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.404399   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.404474   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.404821   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:11.904304   39129 type.go:168] "Request Body" body=""
	I1211 00:15:11.904386   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:11.904651   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:15:12.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:15:12.404458   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:15:12.404820   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:15:12.404875   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-786978 shown above repeats unchanged every ~500ms from 00:15:12.904 through 00:16:09.405, each attempt returning no response; node_ready.go:55 emits the same "connection refused" warning every 2-2.5s throughout ...]
	I1211 00:16:09.905172   39129 type.go:168] "Request Body" body=""
	I1211 00:16:09.905256   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:09.905577   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:10.404232   39129 type.go:168] "Request Body" body=""
	I1211 00:16:10.404303   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:10.404612   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:10.904313   39129 type.go:168] "Request Body" body=""
	I1211 00:16:10.904391   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:10.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:11.404435   39129 type.go:168] "Request Body" body=""
	I1211 00:16:11.404506   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:11.404826   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:11.904829   39129 type.go:168] "Request Body" body=""
	I1211 00:16:11.904896   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:11.905193   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:11.905238   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:12.405036   39129 type.go:168] "Request Body" body=""
	I1211 00:16:12.405112   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:12.405469   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:12.905320   39129 type.go:168] "Request Body" body=""
	I1211 00:16:12.905392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:12.905721   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:13.404376   39129 type.go:168] "Request Body" body=""
	I1211 00:16:13.404448   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:13.404732   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:13.904346   39129 type.go:168] "Request Body" body=""
	I1211 00:16:13.904422   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:13.904775   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:14.404375   39129 type.go:168] "Request Body" body=""
	I1211 00:16:14.404459   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:14.404817   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:14.404880   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:14.904559   39129 type.go:168] "Request Body" body=""
	I1211 00:16:14.904633   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:14.904931   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:15.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:15.404449   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:15.404770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:15.904365   39129 type.go:168] "Request Body" body=""
	I1211 00:16:15.904442   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:15.904762   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:16.404605   39129 type.go:168] "Request Body" body=""
	I1211 00:16:16.404672   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:16.404941   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:16.404981   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:16.905031   39129 type.go:168] "Request Body" body=""
	I1211 00:16:16.905112   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:16.905444   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:17.405255   39129 type.go:168] "Request Body" body=""
	I1211 00:16:17.405328   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:17.405654   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:17.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:16:17.904392   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:17.904661   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:18.404405   39129 type.go:168] "Request Body" body=""
	I1211 00:16:18.404476   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:18.404815   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:18.904504   39129 type.go:168] "Request Body" body=""
	I1211 00:16:18.904587   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:18.904897   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:18.904955   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:19.404348   39129 type.go:168] "Request Body" body=""
	I1211 00:16:19.404413   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:19.404743   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:19.904334   39129 type.go:168] "Request Body" body=""
	I1211 00:16:19.904407   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:19.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:20.404391   39129 type.go:168] "Request Body" body=""
	I1211 00:16:20.404467   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:20.404796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:20.904324   39129 type.go:168] "Request Body" body=""
	I1211 00:16:20.904396   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:20.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:21.404369   39129 type.go:168] "Request Body" body=""
	I1211 00:16:21.404445   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:21.404792   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:21.404845   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:21.904335   39129 type.go:168] "Request Body" body=""
	I1211 00:16:21.904419   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:21.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:22.404420   39129 type.go:168] "Request Body" body=""
	I1211 00:16:22.404499   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:22.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:22.904477   39129 type.go:168] "Request Body" body=""
	I1211 00:16:22.904552   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:22.904882   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:23.404436   39129 type.go:168] "Request Body" body=""
	I1211 00:16:23.404513   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:23.404842   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:23.404895   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:23.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:16:23.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:23.904727   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:24.404381   39129 type.go:168] "Request Body" body=""
	I1211 00:16:24.404453   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:24.404796   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:24.904499   39129 type.go:168] "Request Body" body=""
	I1211 00:16:24.904582   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:24.904892   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:25.404586   39129 type.go:168] "Request Body" body=""
	I1211 00:16:25.404659   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:25.404922   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:25.404961   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:25.904374   39129 type.go:168] "Request Body" body=""
	I1211 00:16:25.904447   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:25.904779   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:26.404518   39129 type.go:168] "Request Body" body=""
	I1211 00:16:26.404592   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:26.404907   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:26.904867   39129 type.go:168] "Request Body" body=""
	I1211 00:16:26.904933   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:26.905185   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:27.404948   39129 type.go:168] "Request Body" body=""
	I1211 00:16:27.405019   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:27.405351   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:27.405411   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:27.904945   39129 type.go:168] "Request Body" body=""
	I1211 00:16:27.905014   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:27.905324   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:28.405117   39129 type.go:168] "Request Body" body=""
	I1211 00:16:28.405196   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:28.405450   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:28.905212   39129 type.go:168] "Request Body" body=""
	I1211 00:16:28.905284   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:28.905568   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:29.404272   39129 type.go:168] "Request Body" body=""
	I1211 00:16:29.404346   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:29.404683   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:29.904271   39129 type.go:168] "Request Body" body=""
	I1211 00:16:29.904353   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:29.904752   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:29.904817   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:30.404469   39129 type.go:168] "Request Body" body=""
	I1211 00:16:30.404548   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:30.404898   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:30.904597   39129 type.go:168] "Request Body" body=""
	I1211 00:16:30.904677   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:30.904983   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:31.404322   39129 type.go:168] "Request Body" body=""
	I1211 00:16:31.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:31.404746   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:31.904776   39129 type.go:168] "Request Body" body=""
	I1211 00:16:31.904848   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:31.905203   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:31.905262   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:32.405016   39129 type.go:168] "Request Body" body=""
	I1211 00:16:32.405089   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:32.405412   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:32.905193   39129 type.go:168] "Request Body" body=""
	I1211 00:16:32.905263   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:32.905517   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:33.405261   39129 type.go:168] "Request Body" body=""
	I1211 00:16:33.405339   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:33.405640   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:33.905301   39129 type.go:168] "Request Body" body=""
	I1211 00:16:33.905376   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:33.905691   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:33.905753   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:34.404316   39129 type.go:168] "Request Body" body=""
	I1211 00:16:34.404387   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:34.404734   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:34.904399   39129 type.go:168] "Request Body" body=""
	I1211 00:16:34.904476   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:34.904832   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:35.404558   39129 type.go:168] "Request Body" body=""
	I1211 00:16:35.404634   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:35.404928   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:35.904571   39129 type.go:168] "Request Body" body=""
	I1211 00:16:35.904645   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:35.904953   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:36.404628   39129 type.go:168] "Request Body" body=""
	I1211 00:16:36.404709   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:36.405010   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:36.405063   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:36.904947   39129 type.go:168] "Request Body" body=""
	I1211 00:16:36.905022   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:36.905335   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:37.405100   39129 type.go:168] "Request Body" body=""
	I1211 00:16:37.405170   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:37.405498   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:37.905264   39129 type.go:168] "Request Body" body=""
	I1211 00:16:37.905342   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:37.905865   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:38.404591   39129 type.go:168] "Request Body" body=""
	I1211 00:16:38.404679   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:38.405054   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:38.405110   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:38.904319   39129 type.go:168] "Request Body" body=""
	I1211 00:16:38.904397   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:38.904721   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:39.404383   39129 type.go:168] "Request Body" body=""
	I1211 00:16:39.404480   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:39.404794   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:39.904517   39129 type.go:168] "Request Body" body=""
	I1211 00:16:39.904592   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:39.904916   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:40.404360   39129 type.go:168] "Request Body" body=""
	I1211 00:16:40.404472   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:40.404740   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:40.904356   39129 type.go:168] "Request Body" body=""
	I1211 00:16:40.904441   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:40.904735   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:40.904783   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:41.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:41.404446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:41.404789   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:41.904722   39129 type.go:168] "Request Body" body=""
	I1211 00:16:41.904798   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:41.905060   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:42.404368   39129 type.go:168] "Request Body" body=""
	I1211 00:16:42.404496   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:42.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:42.904377   39129 type.go:168] "Request Body" body=""
	I1211 00:16:42.904454   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:42.904769   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:42.904822   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:43.404423   39129 type.go:168] "Request Body" body=""
	I1211 00:16:43.404505   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:43.404801   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:43.904350   39129 type.go:168] "Request Body" body=""
	I1211 00:16:43.904451   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:43.904759   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:44.404461   39129 type.go:168] "Request Body" body=""
	I1211 00:16:44.404566   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:44.404866   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:44.904550   39129 type.go:168] "Request Body" body=""
	I1211 00:16:44.904637   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:44.904902   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:44.904951   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:45.404609   39129 type.go:168] "Request Body" body=""
	I1211 00:16:45.404709   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:45.404988   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:45.904379   39129 type.go:168] "Request Body" body=""
	I1211 00:16:45.904460   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:45.904747   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:46.404515   39129 type.go:168] "Request Body" body=""
	I1211 00:16:46.404596   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:46.405024   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:46.904957   39129 type.go:168] "Request Body" body=""
	I1211 00:16:46.905029   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:46.905333   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:46.905384   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:47.405088   39129 type.go:168] "Request Body" body=""
	I1211 00:16:47.405172   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:47.405514   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:47.905049   39129 type.go:168] "Request Body" body=""
	I1211 00:16:47.905126   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:47.905389   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:48.405194   39129 type.go:168] "Request Body" body=""
	I1211 00:16:48.405268   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:48.405562   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:48.905286   39129 type.go:168] "Request Body" body=""
	I1211 00:16:48.905355   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:48.905692   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:48.905744   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:49.404266   39129 type.go:168] "Request Body" body=""
	I1211 00:16:49.404345   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:49.404628   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:49.904318   39129 type.go:168] "Request Body" body=""
	I1211 00:16:49.904401   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:49.904744   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:50.404462   39129 type.go:168] "Request Body" body=""
	I1211 00:16:50.404547   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:50.404858   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:50.904311   39129 type.go:168] "Request Body" body=""
	I1211 00:16:50.904382   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:50.904650   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:51.404329   39129 type.go:168] "Request Body" body=""
	I1211 00:16:51.404402   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:51.404729   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:51.404787   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:51.904747   39129 type.go:168] "Request Body" body=""
	I1211 00:16:51.904831   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:51.905166   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:52.404930   39129 type.go:168] "Request Body" body=""
	I1211 00:16:52.405004   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:52.405261   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:52.904990   39129 type.go:168] "Request Body" body=""
	I1211 00:16:52.905058   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:52.905363   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:53.405161   39129 type.go:168] "Request Body" body=""
	I1211 00:16:53.405240   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:53.405638   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:53.405695   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:53.905292   39129 type.go:168] "Request Body" body=""
	I1211 00:16:53.905363   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:53.905633   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:54.404339   39129 type.go:168] "Request Body" body=""
	I1211 00:16:54.404417   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:54.404809   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:54.904605   39129 type.go:168] "Request Body" body=""
	I1211 00:16:54.904687   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:54.905040   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:55.404740   39129 type.go:168] "Request Body" body=""
	I1211 00:16:55.404817   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:55.405074   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:55.904366   39129 type.go:168] "Request Body" body=""
	I1211 00:16:55.904446   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:55.904834   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:55.904885   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:56.404641   39129 type.go:168] "Request Body" body=""
	I1211 00:16:56.404722   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:56.405063   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:56.904894   39129 type.go:168] "Request Body" body=""
	I1211 00:16:56.904963   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:56.905258   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:57.405073   39129 type.go:168] "Request Body" body=""
	I1211 00:16:57.405147   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:57.405497   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:57.905342   39129 type.go:168] "Request Body" body=""
	I1211 00:16:57.905415   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:57.905765   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:16:57.905820   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	I1211 00:16:58.404453   39129 type.go:168] "Request Body" body=""
	I1211 00:16:58.404534   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:58.404874   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:58.904362   39129 type.go:168] "Request Body" body=""
	I1211 00:16:58.904431   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:58.904770   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:59.404465   39129 type.go:168] "Request Body" body=""
	I1211 00:16:59.404542   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:59.404914   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:16:59.904595   39129 type.go:168] "Request Body" body=""
	I1211 00:16:59.904668   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:16:59.904932   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:00.404506   39129 type.go:168] "Request Body" body=""
	I1211 00:17:00.404598   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:00.404940   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1211 00:17:00.404990   39129 node_ready.go:55] error getting node "functional-786978" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-786978": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-786978 cycle repeats every ~500ms from 00:17:00.904 through 00:17:36.405, each request drawing an empty response and node_ready.go:55 logging "dial tcp 192.168.49.2:8441: connect: connection refused" every few attempts ...]
	I1211 00:17:36.904375   39129 type.go:168] "Request Body" body=""
	I1211 00:17:36.904469   39129 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-786978" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1211 00:17:36.904799   39129 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1211 00:17:37.404341   39129 type.go:168] "Request Body" body=""
	I1211 00:17:37.404399   39129 node_ready.go:38] duration metric: took 6m0.000266247s for node "functional-786978" to be "Ready" ...
	I1211 00:17:37.407624   39129 out.go:203] 
	W1211 00:17:37.410619   39129 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1211 00:17:37.410819   39129 out.go:285] * 
	W1211 00:17:37.413036   39129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:17:37.415867   39129 out.go:203] 
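
The round_trippers/node_ready lines above are minikube's readiness wait: the node object is fetched every 500ms until its Ready condition turns True or the 6m0s window expires. As a rough client-go illustration of that loop (not minikube's actual code; the kubeconfig path, node name, and error handling are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the local kubeconfig (path is illustrative).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 500ms, starting immediately, for up to 6 minutes -- the
		// same cadence and wait window visible in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-786978", metav1.GetOptions{})
				if err != nil {
					// Swallow transient errors (e.g. connection refused while
					// the apiserver is down) so the poll keeps retrying.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // context.DeadlineExceeded after 6m here
	}

With the apiserver down, every Get fails, the condition function keeps returning false, and the poll ends in context.DeadlineExceeded -- the "context deadline exceeded" surfaced in the GUEST_START error above.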
	
	
	==> CRI-O <==
	Dec 11 00:17:46 functional-786978 crio[5370]: time="2025-12-11T00:17:46.590803114Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=f23ab746-5e09-454b-b24c-0b20fc05e27d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678058271Z" level=info msg="Checking image status: minikube-local-cache-test:functional-786978" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678233898Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678278469Z" level=info msg="Image minikube-local-cache-test:functional-786978 not found" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.678349716Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-786978 found" id=de1b9260-4270-4c82-a556-4da52de7aaa4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701597253Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-786978" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701742823Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-786978 not found" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.701784071Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-786978 found" id=3702fa98-e72c-4791-85ca-7a37ea3c58f1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727359319Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-786978" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727499048Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-786978 not found" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:47 functional-786978 crio[5370]: time="2025-12-11T00:17:47.727540353Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-786978 found" id=0b3d3d42-04c9-47d7-ad28-0ea224eaa5de name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:48 functional-786978 crio[5370]: time="2025-12-11T00:17:48.720794223Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6a85b45b-7853-403d-b6ec-d04782984a27 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.05478202Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.054933146Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.055097326Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=c1743bf4-c4a4-47b4-af83-81ad5ecdd1bb name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663448709Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663611642Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.663660817Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=b3dca044-c6c4-4a84-bb69-320442f6d378 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.686768765Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.686929852Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.687160389Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=d0c9f349-6e60-4f21-a5aa-0d3aed676c9b name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.71135857Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.711512108Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:49 functional-786978 crio[5370]: time="2025-12-11T00:17:49.711559978Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=269b143d-89ae-42d3-8432-96192751bf0f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:17:50 functional-786978 crio[5370]: time="2025-12-11T00:17:50.262468188Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32fde6db-d5df-4745-80a7-7d1fc07583ef name=/runtime.v1.ImageService/ImageStatus
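
The ImageStatus lookups above show CRI-O expanding the unqualified name minikube-local-cache-test:functional-786978 through its unqualified-search registries before concluding the image is absent. A toy Go sketch of that expansion (the search order is inferred from the lookups shown, not read from the actual config file):

	package main

	import "fmt"

	func main() {
		// Registries inferred from the lookups above; the authoritative list
		// lives in /etc/containers/registries.conf.d/crio.conf.
		search := []string{"docker.io", "localhost"}
		short := "minikube-local-cache-test:functional-786978"
		for _, reg := range search {
			// Single-component names gain the "library/" namespace when expanded.
			fmt.Printf("%s/library/%s\n", reg, short)
		}
	}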
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:17:54.238048    9551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:54.238703    9551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:54.240254    9551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:54.240720    9551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:17:54.242255    9551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:17:54 up 29 min,  0 user,  load average: 0.41, 0.31, 0.47
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:17:51 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:52 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 11 00:17:52 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:52 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:52 functional-786978 kubelet[9426]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:52 functional-786978 kubelet[9426]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:52 functional-786978 kubelet[9426]: E1211 00:17:52.484091    9426 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:52 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:52 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 831.
	Dec 11 00:17:53 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:53 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:53 functional-786978 kubelet[9448]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:53 functional-786978 kubelet[9448]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:53 functional-786978 kubelet[9448]: E1211 00:17:53.207412    9448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 832.
	Dec 11 00:17:53 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:53 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:17:53 functional-786978 kubelet[9473]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:53 functional-786978 kubelet[9473]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:17:53 functional-786978 kubelet[9473]: E1211 00:17:53.959124    9473 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:17:53 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
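The kubelet crash loop in the capture above (restart counter 830 to 832 within seconds) is the cgroup v1 validation: kubelet v1.35+ refuses to start on a cgroup v1 host unless the KubeletConfiguration option FailCgroupV1 is set to false, as the kubeadm preflight warning later in this report spells out. A minimal Go sketch for checking which hierarchy a host runs (the check is a common convention, not taken from minikube; cgroup v2 exposes cgroup.controllers at the mount root):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The unified cgroup v2 hierarchy exposes cgroup.controllers at its
		// root; its absence means the host is still on cgroup v1.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 -- kubelet v1.35+ refuses to start unless FailCgroupV1 is false")
		} else {
			fmt.Println("could not determine cgroup version:", err)
		}
	}
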
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (353.955852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.42s)
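
The status probe above relies on minikube's Go-template output: the --format={{.APIServer}} value is parsed as a text/template and rendered against the status struct, so only that field is printed. A minimal sketch of the mechanism (the struct here is illustrative, named after the fields minikube prints):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields minikube reports; illustrative only.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The flag value becomes a template; {{.APIServer}} selects one field.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// Output: Stopped -- matching the "Stopped" captured in the stdout above.
	}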

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1211 00:20:09.653227    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:22:21.216401    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:23:44.280510    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:25:09.654847    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:27:21.216749    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.687010256s)

                                                
                                                
-- stdout --
	* [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182146s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
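The suggestion in the log above is the actionable lead: the host docker daemon reports CgroupDriver:cgroupfs (see the docker info entries later in this report) while the kubelet repeatedly fails its health check. A minimal retry sketch, assuming only the profile name and binary path already used in this report; the flag is taken verbatim from minikube's own suggestion:

	# Retry the start with the kubelet cgroup driver pinned to systemd:
	out/minikube-linux-arm64 start -p functional-786978 --extra-config=kubelet.cgroup-driver=systemd

	# Probe the health endpoint that kubeadm's wait-control-plane phase polls,
	# from inside the node container:
	out/minikube-linux-arm64 -p functional-786978 ssh -- curl -sS http://127.0.0.1:10248/healthz

If the probe still refuses connections, 'journalctl -xeu kubelet' inside the node is the next stop, as the kubeadm output itself advises.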
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.688187827s for "functional-786978" cluster.
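Separately, the repeated SystemVerification warning is actionable on its own: from kubelet v1.35 on, cgroup v1 hosts must opt in explicitly. A hedged sketch of the opt-in the warning names; the field spelling follows KEP-5573 and should be verified against the kubelet version in use, and the file path here is purely illustrative:

	# Write the cgroup v1 opt-in as a kubelet config snippet (hypothetical path);
	# point kubelet's --config (or a --config-dir drop-in) at it:
	cat <<'EOF' > kubelet-cgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF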
I1211 00:30:07.980780    4875 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
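The detail that matters in the inspect dump above is the apiserver mapping: container port 8441/tcp is published on 127.0.0.1:32786. A hedged one-liner to extract it, assuming jq is available on the host:

	docker inspect functional-786978 | jq -r '.[0].NetworkSettings.Ports["8441/tcp"][0].HostPort'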
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (310.786409ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
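Exit status 2 alongside a Host of Running is consistent: minikube status encodes component health in its exit code, so a running container with a dead control plane still exits non-zero. A sketch querying the remaining fields with the same Go-template flag used above (field names assumed from minikube's status output):

	out/minikube-linux-arm64 status -p functional-786978 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'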
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh     │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image   │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete  │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start   │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start   │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:latest                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add minikube-local-cache-test:functional-786978                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache delete minikube-local-cache-test:functional-786978                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl images                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ cache   │ functional-786978 cache reload                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ kubectl │ functional-786978 kubectl -- --context functional-786978 get pods                                                                                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ start   │ -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:17:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:17:55.340423   45025 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:17:55.340537   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340541   45025 out.go:374] Setting ErrFile to fd 2...
	I1211 00:17:55.340544   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340791   45025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:17:55.341139   45025 out.go:368] Setting JSON to false
	I1211 00:17:55.342235   45025 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1762,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:17:55.342290   45025 start.go:143] virtualization:  
	I1211 00:17:55.345626   45025 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:17:55.349437   45025 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:17:55.349518   45025 notify.go:221] Checking for updates...
	I1211 00:17:55.355612   45025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:17:55.358489   45025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:17:55.361319   45025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:17:55.364268   45025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:17:55.367246   45025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:17:55.370742   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:55.370850   45025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:17:55.397690   45025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:17:55.397801   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.502686   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.493021097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.502775   45025 docker.go:319] overlay module found
	I1211 00:17:55.506026   45025 out.go:179] * Using the docker driver based on existing profile
	I1211 00:17:55.508857   45025 start.go:309] selected driver: docker
	I1211 00:17:55.508866   45025 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.508963   45025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:17:55.509064   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.563622   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.55460881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.564041   45025 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 00:17:55.564074   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:55.564121   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:55.564168   45025 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.567337   45025 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:17:55.570124   45025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:17:55.572957   45025 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:17:55.575721   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:55.575758   45025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:17:55.575767   45025 cache.go:65] Caching tarball of preloaded images
	I1211 00:17:55.575808   45025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:17:55.575848   45025 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:17:55.575857   45025 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:17:55.575972   45025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:17:55.595069   45025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:17:55.595078   45025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:17:55.595099   45025 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:17:55.595134   45025 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:17:55.595195   45025 start.go:364] duration metric: took 45.113µs to acquireMachinesLock for "functional-786978"
	I1211 00:17:55.595213   45025 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:17:55.595217   45025 fix.go:54] fixHost starting: 
	I1211 00:17:55.595484   45025 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:17:55.612234   45025 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:17:55.612254   45025 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:17:55.615553   45025 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:17:55.615576   45025 machine.go:94] provisionDockerMachine start ...
	I1211 00:17:55.615650   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.633023   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.633331   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.633337   45025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:17:55.782629   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.782643   45025 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:17:55.782717   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.800268   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.800560   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.800569   45025 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:17:55.960068   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.960134   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.979369   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.979668   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.979683   45025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:17:56.131539   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 00:17:56.131559   45025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:17:56.131581   45025 ubuntu.go:190] setting up certificates
	I1211 00:17:56.131589   45025 provision.go:84] configureAuth start
	I1211 00:17:56.131663   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:56.153195   45025 provision.go:143] copyHostCerts
	I1211 00:17:56.153275   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:17:56.153283   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:17:56.153368   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:17:56.153542   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:17:56.153546   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:17:56.153590   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:17:56.153677   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:17:56.153682   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:17:56.153707   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:17:56.153777   45025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:17:56.467494   45025 provision.go:177] copyRemoteCerts
	I1211 00:17:56.467553   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:17:56.467596   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.484090   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:56.587917   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:17:56.605865   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 00:17:56.622832   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:17:56.639884   45025 provision.go:87] duration metric: took 508.274173ms to configureAuth
	I1211 00:17:56.639901   45025 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:17:56.640097   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:56.640201   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.656951   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:56.657259   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:56.657272   45025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:17:57.016039   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:17:57.016056   45025 machine.go:97] duration metric: took 1.400473029s to provisionDockerMachine
	I1211 00:17:57.016068   45025 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:17:57.016080   45025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:17:57.016152   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:17:57.016210   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.035864   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.138938   45025 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:17:57.142378   45025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:17:57.142395   45025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:17:57.142405   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:17:57.142462   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:17:57.142546   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:17:57.142617   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:17:57.142658   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:17:57.149965   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:57.167412   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:17:57.184830   45025 start.go:296] duration metric: took 168.748285ms for postStartSetup
	I1211 00:17:57.184913   45025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:17:57.184954   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.203305   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.304245   45025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:17:57.309118   45025 fix.go:56] duration metric: took 1.713893936s for fixHost
	I1211 00:17:57.309133   45025 start.go:83] releasing machines lock for "functional-786978", held for 1.713931903s
	I1211 00:17:57.309206   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:57.326163   45025 ssh_runner.go:195] Run: cat /version.json
	I1211 00:17:57.326207   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.326441   45025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:17:57.326492   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.346150   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.355283   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.447048   45025 ssh_runner.go:195] Run: systemctl --version
	I1211 00:17:57.543733   45025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:17:57.583708   45025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 00:17:57.588962   45025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:17:57.589026   45025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:17:57.598123   45025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:17:57.598147   45025 start.go:496] detecting cgroup driver to use...
	I1211 00:17:57.598178   45025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:17:57.598242   45025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:17:57.616553   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:17:57.632037   45025 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:17:57.632116   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:17:57.648871   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:17:57.662555   45025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:17:57.780641   45025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:17:57.896253   45025 docker.go:234] disabling docker service ...
	I1211 00:17:57.896308   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:17:57.910709   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:17:57.923903   45025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:17:58.032234   45025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:17:58.154255   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:17:58.166925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:17:58.180565   45025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:17:58.180619   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.189311   45025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:17:58.189376   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.198596   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.207202   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.215908   45025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:17:58.223742   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.232864   45025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.241359   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
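The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf line by line: pin the pause image, force cgroup_manager to cgroupfs, and reinsert conmon_cgroup after it. A small stdlib sketch of the same rewrites applied in-process with regexp; the input fragment is an illustrative stand-in for the real file.

    // crioconf.go - minimal sketch: the same line rewrites the sed calls
    // above perform, done with (?m) multiline regexps on a string.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// delete any conmon_cgroup line, then re-add it after cgroup_manager
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }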
	I1211 00:17:58.249993   45025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:17:58.257330   45025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:17:58.264525   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.395006   45025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 00:17:58.567132   45025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:17:58.567191   45025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:17:58.572106   45025 start.go:564] Will wait 60s for crictl version
	I1211 00:17:58.572166   45025 ssh_runner.go:195] Run: which crictl
	I1211 00:17:58.576600   45025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:17:58.605345   45025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:17:58.605434   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.635482   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.670505   45025 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:17:58.673486   45025 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:17:58.691254   45025 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:17:58.698413   45025 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1211 00:17:58.701098   45025 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:17:58.701227   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:58.701291   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.741056   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.741070   45025 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:17:58.741127   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.766313   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.766324   45025 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:17:58.766330   45025 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:17:58.766420   45025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
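Note the double ExecStart= in the unit above: in a systemd drop-in, an empty ExecStart= clears the base unit's command list before the new command is set. A minimal sketch of rendering such a drop-in with text/template; the struct fields and abbreviated flag set are illustrative, not minikube's actual template.

    // kubeletunit.go - minimal sketch: render a kubelet systemd drop-in
    // like the one above with text/template (fields are illustrative).
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
    	t := template.Must(template.New("unit").Parse(unit))
    	err := t.Execute(os.Stdout, map[string]string{
    		"Kubelet": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
    		"Node":    "functional-786978",
    		"IP":      "192.168.49.2",
    	})
    	if err != nil {
    		panic(err)
    	}
    }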
	I1211 00:17:58.766498   45025 ssh_runner.go:195] Run: crio config
	I1211 00:17:58.831179   45025 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1211 00:17:58.831214   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:58.831224   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:58.831240   45025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:17:58.831262   45025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:17:58.831383   45025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:17:58.831452   45025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
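The generated kubeadm.yaml above bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- lines. A stdlib-only sketch that splits such a bundle and reports each document's kind; the inline bundle is a shortened stand-in for the full file.

    // splitkinds.go - minimal sketch: split a multi-document kubeadm.yaml
    // on "---" separators and report each document's kind.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func main() {
    	bundle := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    ` // shortened stand-in for the full config above
    	kindRe := regexp.MustCompile(`(?m)^\s*kind:\s*(\S+)`)
    	for i, doc := range strings.Split(bundle, "---") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("doc %d: %s\n", i, m[1])
    		}
    	}
    }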
	I1211 00:17:58.839023   45025 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:17:58.839084   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:17:58.846528   45025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:17:58.859010   45025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:17:58.871952   45025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1211 00:17:58.884395   45025 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:17:58.888346   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.999004   45025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:17:59.014620   45025 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:17:59.014632   45025 certs.go:195] generating shared ca certs ...
	I1211 00:17:59.014647   45025 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:17:59.014834   45025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:17:59.014887   45025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:17:59.014894   45025 certs.go:257] generating profile certs ...
	I1211 00:17:59.015111   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:17:59.015168   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:17:59.015206   45025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:17:59.015330   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:17:59.015361   45025 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:17:59.015369   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:17:59.015399   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:17:59.015424   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:17:59.015449   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:17:59.015495   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:59.016236   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:17:59.036319   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:17:59.054207   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:17:59.085140   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:17:59.102589   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:17:59.119619   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:17:59.137775   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:17:59.155046   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:17:59.173200   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:17:59.191371   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:17:59.208847   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:17:59.225559   45025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:17:59.238258   45025 ssh_runner.go:195] Run: openssl version
	I1211 00:17:59.244279   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.251482   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:17:59.258806   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262560   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262615   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.303500   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:17:59.310986   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.318422   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:17:59.325839   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329190   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329239   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.369865   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:17:59.377731   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.385365   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:17:59.392850   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396464   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396534   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.437551   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
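Each of the three rounds above installs a CA file by symlinking /etc/ssl/certs/<subject-hash>.0 at the PEM, where the hash (51391683, 3ec20f2e, b5213941) comes from openssl x509 -hash. A sketch of that install step, shelling out to openssl via os/exec, assuming openssl is on PATH; the path is one of the files from the log and otherwise illustrative.

    // certlink.go - minimal sketch: compute the OpenSSL subject hash for a
    // PEM and install the /etc/ssl/certs/<hash>.0 symlink, mirroring the
    // openssl/ln sequence in the log (assumes openssl on PATH).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as in the log
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // ln -fs semantics: replace any stale link first
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }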
	I1211 00:17:59.445097   45025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:17:59.449099   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:17:59.490493   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:17:59.531562   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:17:59.572726   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:17:59.613479   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:17:59.656606   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
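The six checks above use openssl x509 -checkend 86400, which exits nonzero when a certificate expires within a day. The stdlib equivalent is to parse the PEM and compare NotAfter; a minimal sketch for one file (path taken from the log, otherwise illustrative).

    // checkend.go - minimal sketch: the stdlib equivalent of
    // "openssl x509 -noout -checkend 86400" for one certificate file.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: fail if NotAfter falls within the next 86400s.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another day")
    }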
	I1211 00:17:59.697483   45025 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:59.697558   45025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:17:59.697631   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.726147   45025 cri.go:89] found id: ""
	I1211 00:17:59.726208   45025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:17:59.734119   45025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:17:59.734129   45025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:17:59.734181   45025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:17:59.741669   45025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.742193   45025 kubeconfig.go:125] found "functional-786978" server: "https://192.168.49.2:8441"
	I1211 00:17:59.743487   45025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:17:59.751799   45025 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-11 00:03:23.654512319 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-11 00:17:58.880060835 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1211 00:17:59.751819   45025 kubeadm.go:1161] stopping kube-system containers ...
	I1211 00:17:59.751836   45025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1211 00:17:59.751895   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.779633   45025 cri.go:89] found id: ""
	I1211 00:17:59.779698   45025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 00:17:59.796551   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:17:59.805010   45025 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 11 00:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 11 00:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 11 00:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 11 00:07 /etc/kubernetes/scheduler.conf
	
	I1211 00:17:59.805070   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:17:59.813093   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:17:59.820917   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.820973   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:17:59.828623   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.836494   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.836548   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.843945   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:17:59.851499   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.851553   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:17:59.859289   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:17:59.867193   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:17:59.916974   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.185880   45025 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.268883094s)
	I1211 00:18:02.185949   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.399533   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.467551   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.514148   45025 api_server.go:52] waiting for apiserver process to appear ...
	I1211 00:18:02.514234   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.014347   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.515068   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.014554   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.515116   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.016511   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.515100   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.017684   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.515326   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.014433   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.515145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.014543   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.514950   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.015735   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.514456   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.015825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.514630   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.015335   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.514451   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.014804   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.514494   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.015458   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.514452   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.014884   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.514333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.022420   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.515034   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.017224   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.514464   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.015399   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.514329   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.015271   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.514462   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.017520   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.514376   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.017541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.515013   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.017761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.514358   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.014403   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.514344   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.017371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.515172   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.016422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.514490   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.020263   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.514922   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.014789   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.514345   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.015761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.514955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.018541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.514310   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.014448   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.514337   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.018852   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.515041   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.020888   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.514298   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.022333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.515045   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:33.014735   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:33.514347   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:34.017953   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:34.515070   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:35.015196   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:35.514355   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:36.014375   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:36.514335   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:37.014528   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:37.514323   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:38.014416   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:38.515174   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:39.014438   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:39.514458   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:40.021545   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:40.514947   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:41.016088   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:41.514879   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:42.014943   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:42.514386   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:43.016904   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:43.515352   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:44.015231   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:44.514894   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:45.014476   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:45.514778   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:46.016439   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:46.515114   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:47.014420   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:47.514853   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:48.016610   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:48.514436   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:49.014585   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:49.514442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:50.014533   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:50.514763   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:51.016122   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:51.514418   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:52.015418   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:52.514462   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:53.014702   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:53.515080   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:54.015415   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:54.514399   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:55.016231   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:55.514627   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:56.015154   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:56.515225   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:57.020324   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:57.514495   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:58.015016   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:58.514389   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:59.018412   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:59.515094   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:00.018157   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:00.515152   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:01.014878   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:01.514507   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:02.015181   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
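The run of lines above is the apiserver wait loop: pgrep roughly every 500ms until the process appears or the 60s-style budget runs out (here it never appears, so the loop falls through to log gathering). A sketch of such a bounded poll; the pgrep pattern is the one from the log, the deadline is illustrative.

    // waitproc.go - minimal sketch: poll for a process every 500ms with an
    // overall deadline, like the apiserver wait loop above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // illustrative budget
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for now := range tick.C {
    		if now.After(deadline) {
    			fmt.Println("timed out waiting for kube-apiserver")
    			return
    		}
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    	}
    }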
	I1211 00:19:02.514444   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:02.514543   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:02.540506   45025 cri.go:89] found id: ""
	I1211 00:19:02.540520   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.540528   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:02.540533   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:02.540593   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:02.567414   45025 cri.go:89] found id: ""
	I1211 00:19:02.567427   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.567434   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:02.567439   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:02.567500   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:02.598249   45025 cri.go:89] found id: ""
	I1211 00:19:02.598263   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.598270   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:02.598277   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:02.598348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:02.624793   45025 cri.go:89] found id: ""
	I1211 00:19:02.624807   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.624822   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:02.624828   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:02.624894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:02.654153   45025 cri.go:89] found id: ""
	I1211 00:19:02.654170   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.654177   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:02.654182   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:02.654251   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:02.682217   45025 cri.go:89] found id: ""
	I1211 00:19:02.682231   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.682239   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:02.682244   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:02.682304   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:02.708660   45025 cri.go:89] found id: ""
	I1211 00:19:02.708674   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.708682   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:02.708690   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:02.708700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:02.775902   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:02.775921   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:02.787446   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:02.787463   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:02.857001   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:02.857011   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:02.857022   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:02.927792   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:02.927812   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
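When the wait stalls, the tool falls back to gathering diagnostics: kubelet journal, dmesg, describe nodes, CRI-O journal, and container status. A sketch running that family of commands and collecting combined output; the command strings are the ones from the log, the structure around them is illustrative.

    // diag.go - minimal sketch: run the diagnostic commands gathered above
    // and collect their combined output, in order.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	diags := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, d := range diags {
    		// CombinedOutput keeps stderr, which is where most of the
    		// useful failure detail lands in these logs.
    		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", d.name, err, out)
    	}
    }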
	I1211 00:19:05.458523   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:05.468377   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:05.468436   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:05.492943   45025 cri.go:89] found id: ""
	I1211 00:19:05.492957   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.492963   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:05.492968   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:05.493030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:05.520504   45025 cri.go:89] found id: ""
	I1211 00:19:05.520517   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.520525   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:05.520530   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:05.520592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:05.551505   45025 cri.go:89] found id: ""
	I1211 00:19:05.551518   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.551525   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:05.551531   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:05.551586   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:05.580658   45025 cri.go:89] found id: ""
	I1211 00:19:05.580672   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.580679   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:05.580683   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:05.580757   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:05.607012   45025 cri.go:89] found id: ""
	I1211 00:19:05.607026   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.607033   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:05.607038   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:05.607102   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:05.632061   45025 cri.go:89] found id: ""
	I1211 00:19:05.632075   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.632082   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:05.632087   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:05.632152   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:05.658481   45025 cri.go:89] found id: ""
	I1211 00:19:05.658494   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.658514   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:05.658522   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:05.658533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:05.724859   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:05.724876   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:05.735886   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:05.735901   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:05.798612   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:05.798622   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:05.798634   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:05.867342   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:05.867360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
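The block above is one full iteration of minikube's control-plane probe: it looks for a kube-apiserver process with pgrep, then queries the CRI runtime for containers matching each control-plane component by name, treating empty crictl output as "not found". A minimal sketch of that container check, assuming a host where sudo and crictl are available (the helper name and error handling are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=<name>`
    // calls in the log: it returns the IDs of all containers in any state
    // whose name matches, and an empty slice when none exist.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("listing %s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // Matches the `No container was found matching ...` warnings above.
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }

crictl exits successfully with empty output when nothing matches, which is why the log records found id: "" followed by "0 containers".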
	I1211 00:19:08.400995   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:08.413387   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:08.413449   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:08.448131   45025 cri.go:89] found id: ""
	I1211 00:19:08.448144   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.448151   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:08.448157   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:08.448216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:08.477588   45025 cri.go:89] found id: ""
	I1211 00:19:08.477601   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.477608   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:08.477612   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:08.477671   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:08.502742   45025 cri.go:89] found id: ""
	I1211 00:19:08.502755   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.502763   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:08.502768   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:08.502826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:08.528585   45025 cri.go:89] found id: ""
	I1211 00:19:08.528598   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.528606   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:08.528611   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:08.528674   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:08.559543   45025 cri.go:89] found id: ""
	I1211 00:19:08.559557   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.559564   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:08.559569   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:08.559630   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:08.585362   45025 cri.go:89] found id: ""
	I1211 00:19:08.585377   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.585384   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:08.585390   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:08.585462   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:08.611828   45025 cri.go:89] found id: ""
	I1211 00:19:08.611842   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.611849   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:08.611856   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:08.611866   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:08.678470   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:08.678488   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:08.691361   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:08.691376   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:08.762621   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:08.762636   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:08.762649   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:08.832475   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:08.832493   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
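The "container status" step uses a shell fallback chain rather than a fixed binary path: `which crictl || echo crictl` substitutes the resolved path when crictl is installed (or the bare name otherwise), and if that whole invocation fails, `|| sudo docker ps -a` retries with Docker. A sketch of running the same one-liner locally (minikube routes it through its SSH runner, which is omitted here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback chain as the log line: resolve crictl's path if present,
        // otherwise try the bare name; if that invocation fails, fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("container status failed: %v\n", err)
        }
        fmt.Print(string(out))
    }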
	I1211 00:19:11.361776   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:11.371640   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:11.371694   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:11.398476   45025 cri.go:89] found id: ""
	I1211 00:19:11.398489   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.398496   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:11.398501   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:11.398559   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:11.429955   45025 cri.go:89] found id: ""
	I1211 00:19:11.429969   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.429976   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:11.429982   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:11.430037   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:11.457296   45025 cri.go:89] found id: ""
	I1211 00:19:11.457309   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.457316   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:11.457324   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:11.457382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:11.482941   45025 cri.go:89] found id: ""
	I1211 00:19:11.482954   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.482962   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:11.483012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:11.483069   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:11.508408   45025 cri.go:89] found id: ""
	I1211 00:19:11.508431   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.508438   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:11.508443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:11.508510   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:11.533840   45025 cri.go:89] found id: ""
	I1211 00:19:11.533854   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.533869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:11.533875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:11.533950   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:11.559317   45025 cri.go:89] found id: ""
	I1211 00:19:11.559331   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.559338   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:11.559345   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:11.559354   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:11.626027   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:11.626045   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:11.637884   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:11.637900   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:11.704689   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:11.704700   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:11.704711   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:11.774803   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:11.774821   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
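In the dmesg step of each cycle, the flags correspond, in util-linux dmesg, to: -P disables the pager, -H selects human-readable output, -L=never turns colour codes off so the captured text stays clean, and --level warn,err,crit,alert,emerg keeps only warning-or-worse records before tail caps the result at 400 lines. A sketch of the same capture from Go, reusing the exact command string from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Command string copied from the log; see the flag breakdown above.
        cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("dmesg capture failed: %v\n", err)
        }
        fmt.Print(string(out))
    }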
	I1211 00:19:14.306913   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:14.318077   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:14.318146   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:14.343407   45025 cri.go:89] found id: ""
	I1211 00:19:14.343421   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.343428   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:14.343433   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:14.343497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:14.370322   45025 cri.go:89] found id: ""
	I1211 00:19:14.370336   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.370342   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:14.370348   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:14.370406   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:14.397449   45025 cri.go:89] found id: ""
	I1211 00:19:14.397462   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.397469   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:14.397474   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:14.397531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:14.430459   45025 cri.go:89] found id: ""
	I1211 00:19:14.430472   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.430479   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:14.430501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:14.430595   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:14.461756   45025 cri.go:89] found id: ""
	I1211 00:19:14.461769   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.461776   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:14.461781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:14.461849   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:14.488174   45025 cri.go:89] found id: ""
	I1211 00:19:14.488189   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.488196   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:14.488201   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:14.488258   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:14.517330   45025 cri.go:89] found id: ""
	I1211 00:19:14.517343   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.517350   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:14.517357   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:14.517368   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.549197   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:14.549215   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:14.618908   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:14.618926   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:14.630263   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:14.630279   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:14.698427   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:14.698437   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:14.698453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
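The kubelet and CRI-O gathering steps are plain journalctl calls: -u restricts output to one systemd unit and -n 400 keeps only the most recent 400 journal entries, bounding the size of each capture. A minimal local equivalent (unit names taken from the log; the SSH transport minikube uses is omitted):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherUnitLogs mirrors the journalctl calls in the log: -u selects one
    // systemd unit, -n 400 keeps only the most recent 400 entries.
    func gatherUnitLogs(unit string) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "crio"} {
            logs, err := gatherUnitLogs(unit)
            if err != nil {
                fmt.Printf("gathering %s logs failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("=== %s: captured %d bytes ===\n", unit, len(logs))
        }
    }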
	I1211 00:19:17.273043   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:17.283257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:17.283323   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:17.308437   45025 cri.go:89] found id: ""
	I1211 00:19:17.308450   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.308457   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:17.308462   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:17.308522   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:17.337454   45025 cri.go:89] found id: ""
	I1211 00:19:17.337467   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.337474   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:17.337479   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:17.337538   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:17.363695   45025 cri.go:89] found id: ""
	I1211 00:19:17.363709   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.363717   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:17.363722   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:17.363781   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:17.388300   45025 cri.go:89] found id: ""
	I1211 00:19:17.388314   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.388321   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:17.388327   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:17.388383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:17.418934   45025 cri.go:89] found id: ""
	I1211 00:19:17.418947   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.418954   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:17.418959   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:17.419036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:17.453193   45025 cri.go:89] found id: ""
	I1211 00:19:17.453207   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.453214   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:17.453220   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:17.453308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:17.487806   45025 cri.go:89] found id: ""
	I1211 00:19:17.487820   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.487827   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:17.487834   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:17.487845   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:17.553739   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:17.553758   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:17.564920   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:17.564936   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:17.630666   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:17.630680   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:17.630705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.701596   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:17.701614   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:20.234880   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:20.244988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:20.245050   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:20.273088   45025 cri.go:89] found id: ""
	I1211 00:19:20.273101   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.273109   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:20.273114   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:20.273175   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:20.302062   45025 cri.go:89] found id: ""
	I1211 00:19:20.302076   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.302083   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:20.302089   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:20.302157   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:20.326827   45025 cri.go:89] found id: ""
	I1211 00:19:20.326841   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.326859   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:20.326865   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:20.326922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:20.356288   45025 cri.go:89] found id: ""
	I1211 00:19:20.356302   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.356309   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:20.356315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:20.356375   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:20.382358   45025 cri.go:89] found id: ""
	I1211 00:19:20.382373   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.382380   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:20.382386   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:20.382445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:20.417393   45025 cri.go:89] found id: ""
	I1211 00:19:20.417407   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.417424   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:20.417430   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:20.417488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:20.447521   45025 cri.go:89] found id: ""
	I1211 00:19:20.447534   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.447541   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:20.447550   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:20.447560   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:20.518467   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:20.518484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:20.530666   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:20.530681   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:20.599280   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
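Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver at localhost:8441 (the apiserver port configured for this cluster), and "connection refused" on loopback means nothing is listening on that port at all, which is consistent with the empty kube-apiserver container listings above. A quick way to distinguish that state from a hung apiserver, assuming the same host and port:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" means the port is closed (no listener), whereas
        // a timeout would point at a firewall or an unresponsive process.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Printf("apiserver unreachable: %v\n", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }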
	I1211 00:19:20.599290   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:20.599301   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:20.666760   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:20.666778   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.200454   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:23.210413   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:23.210471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:23.234734   45025 cri.go:89] found id: ""
	I1211 00:19:23.234748   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.234756   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:23.234761   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:23.234822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:23.260526   45025 cri.go:89] found id: ""
	I1211 00:19:23.260540   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.260547   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:23.260552   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:23.260611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:23.284278   45025 cri.go:89] found id: ""
	I1211 00:19:23.284291   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.284298   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:23.284303   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:23.284360   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:23.309416   45025 cri.go:89] found id: ""
	I1211 00:19:23.309431   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.309438   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:23.309443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:23.309502   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:23.335667   45025 cri.go:89] found id: ""
	I1211 00:19:23.335682   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.335689   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:23.335695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:23.335751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:23.364847   45025 cri.go:89] found id: ""
	I1211 00:19:23.364862   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.364869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:23.364875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:23.364941   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:23.389436   45025 cri.go:89] found id: ""
	I1211 00:19:23.389449   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.389457   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:23.389464   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:23.389477   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:23.402133   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:23.402149   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:23.484989   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:23.484999   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:23.485010   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:23.553567   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:23.553586   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.583342   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:23.583359   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.151360   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:26.161613   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:26.161676   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:26.187432   45025 cri.go:89] found id: ""
	I1211 00:19:26.187446   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.187453   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:26.187459   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:26.187514   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:26.212567   45025 cri.go:89] found id: ""
	I1211 00:19:26.212581   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.212588   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:26.212593   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:26.212650   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:26.238347   45025 cri.go:89] found id: ""
	I1211 00:19:26.238359   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.238367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:26.238372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:26.238426   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:26.264493   45025 cri.go:89] found id: ""
	I1211 00:19:26.264506   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.264513   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:26.264518   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:26.264578   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:26.289421   45025 cri.go:89] found id: ""
	I1211 00:19:26.289435   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.289442   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:26.289446   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:26.289512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:26.317737   45025 cri.go:89] found id: ""
	I1211 00:19:26.317751   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.317758   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:26.317776   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:26.317832   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:26.342012   45025 cri.go:89] found id: ""
	I1211 00:19:26.342025   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.342032   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:26.342039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:26.342049   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:26.409907   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:26.409925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:26.444709   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:26.444725   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.520673   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:26.520692   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:26.533201   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:26.533217   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:26.595360   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
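The timestamps (00:19:05, :08, :11, ...) show the whole probe repeating on a roughly three-second cadence, each round starting from the same pgrep check. A hedged sketch of such a fixed-interval wait loop; the interval is read off the log timestamps and the deadline is illustrative, neither is taken from minikube's source:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls the same pgrep check as the log on a fixed
    // interval until a kube-apiserver process appears or the deadline passes.
    func waitForAPIServer(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("kube-apiserver did not appear before the deadline")
    }

    func main() {
        if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }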
	I1211 00:19:29.096255   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:29.106290   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:29.106348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:29.135863   45025 cri.go:89] found id: ""
	I1211 00:19:29.135876   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.135883   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:29.135888   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:29.135948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:29.162996   45025 cri.go:89] found id: ""
	I1211 00:19:29.163011   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.163018   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:29.163024   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:29.163104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:29.189722   45025 cri.go:89] found id: ""
	I1211 00:19:29.189738   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.189745   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:29.189749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:29.189834   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:29.215022   45025 cri.go:89] found id: ""
	I1211 00:19:29.215036   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.215042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:29.215047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:29.215106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:29.240657   45025 cri.go:89] found id: ""
	I1211 00:19:29.240671   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.240679   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:29.240684   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:29.240744   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:29.265406   45025 cri.go:89] found id: ""
	I1211 00:19:29.265420   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.265427   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:29.265432   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:29.265488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:29.289115   45025 cri.go:89] found id: ""
	I1211 00:19:29.289128   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.289136   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:29.289143   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:29.289154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:29.316627   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:29.316646   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:29.381873   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:29.381892   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:29.392836   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:29.392852   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:29.474052   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
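	Every describe-nodes attempt in this stretch fails the same way: kubectl cannot dial the apiserver on localhost:8441. A quick manual triage from a shell inside the node could confirm whether anything is listening on that port at all; the commands below are illustrative, not part of the harness, and assume ss, curl, and journalctl are available in the minikube image:

	    # hypothetical manual triage, not taken from the test code
	    sudo ss -tlnp | grep ':8441' || echo 'nothing listening on 8441'
	    curl -ksS https://localhost:8441/healthz; echo
	    sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'apiserver|fail'

	Given that no kube-apiserver container is found by any of the crictl sweeps, connection refused is the expected outcome here rather than a networking fault.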
	I1211 00:19:29.474062   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:29.474072   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.041538   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:32.052288   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:32.052353   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:32.078058   45025 cri.go:89] found id: ""
	I1211 00:19:32.078071   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.078078   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:32.078084   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:32.078143   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:32.104226   45025 cri.go:89] found id: ""
	I1211 00:19:32.104240   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.104251   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:32.104256   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:32.104315   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:32.130104   45025 cri.go:89] found id: ""
	I1211 00:19:32.130123   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.130130   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:32.130135   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:32.130196   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:32.156116   45025 cri.go:89] found id: ""
	I1211 00:19:32.156131   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.156138   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:32.156143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:32.156204   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:32.182027   45025 cri.go:89] found id: ""
	I1211 00:19:32.182039   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.182046   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:32.182051   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:32.182119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:32.206462   45025 cri.go:89] found id: ""
	I1211 00:19:32.206476   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.206483   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:32.206488   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:32.206553   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:32.230714   45025 cri.go:89] found id: ""
	I1211 00:19:32.230727   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.230734   45025 logs.go:284] No container was found matching "kindnet"
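	This sweep over the expected control-plane components repeats every cycle with the same empty result. The same check can be reproduced by hand; a minimal sketch, with the component list copied from the log and crictl assumed to be on the node's PATH:

	    # illustrative only: mirrors the harness's per-component queries above
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done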
	I1211 00:19:32.230757   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:32.230773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:32.295411   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:32.295430   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:32.306690   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:32.306705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:32.373425   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:32.373435   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:32.373446   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.441247   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:32.441264   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
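	The timestamps (00:19:29, 00:19:32, 00:19:35, ...) show the harness re-probing for a running apiserver roughly every three seconds. A bash equivalent of that wait loop, with the interval inferred from the timestamps rather than taken from minikube's source, would be:

	    # sketch of the observed poll; the 3s interval is an inference from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done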
	I1211 00:19:34.988442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:34.998718   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:34.998785   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:35.036207   45025 cri.go:89] found id: ""
	I1211 00:19:35.036221   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.036231   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:35.036236   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:35.036298   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:35.062611   45025 cri.go:89] found id: ""
	I1211 00:19:35.062624   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.062631   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:35.062636   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:35.062692   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:35.089089   45025 cri.go:89] found id: ""
	I1211 00:19:35.089102   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.089109   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:35.089115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:35.089177   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:35.116537   45025 cri.go:89] found id: ""
	I1211 00:19:35.116550   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.116558   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:35.116564   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:35.116625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:35.141369   45025 cri.go:89] found id: ""
	I1211 00:19:35.141383   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.141390   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:35.141396   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:35.141464   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:35.167717   45025 cri.go:89] found id: ""
	I1211 00:19:35.167731   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.167738   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:35.167746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:35.167805   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:35.193275   45025 cri.go:89] found id: ""
	I1211 00:19:35.193288   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.193295   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:35.193303   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:35.193313   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:35.223396   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:35.223412   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:35.291423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:35.291442   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:35.302744   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:35.302760   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:35.366712   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:35.366722   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:35.366732   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:37.940570   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:37.951183   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:37.951244   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:37.977384   45025 cri.go:89] found id: ""
	I1211 00:19:37.977412   45025 logs.go:282] 0 containers: []
	W1211 00:19:37.977419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:37.977425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:37.977489   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:38.002327   45025 cri.go:89] found id: ""
	I1211 00:19:38.002341   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.002349   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:38.002354   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:38.002433   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:38.032932   45025 cri.go:89] found id: ""
	I1211 00:19:38.032947   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.032955   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:38.032960   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:38.033023   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:38.060494   45025 cri.go:89] found id: ""
	I1211 00:19:38.060508   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.060516   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:38.060522   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:38.060584   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:38.090424   45025 cri.go:89] found id: ""
	I1211 00:19:38.090438   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.090445   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:38.090450   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:38.090511   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:38.117237   45025 cri.go:89] found id: ""
	I1211 00:19:38.117250   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.117258   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:38.117268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:38.117330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:38.144173   45025 cri.go:89] found id: ""
	I1211 00:19:38.144187   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.144195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:38.144203   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:38.144213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:38.213450   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:38.213474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:38.224711   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:38.224727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:38.292623   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:38.292634   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:38.292644   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:38.360121   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:38.360139   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:40.897394   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:40.907308   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:40.907368   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:40.935845   45025 cri.go:89] found id: ""
	I1211 00:19:40.935861   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.935868   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:40.935874   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:40.935936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:40.961885   45025 cri.go:89] found id: ""
	I1211 00:19:40.961899   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.961906   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:40.961911   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:40.961972   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:40.992115   45025 cri.go:89] found id: ""
	I1211 00:19:40.992129   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.992136   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:40.992141   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:40.992199   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:41.017243   45025 cri.go:89] found id: ""
	I1211 00:19:41.017259   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.017269   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:41.017274   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:41.017355   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:41.046002   45025 cri.go:89] found id: ""
	I1211 00:19:41.046016   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.046022   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:41.046027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:41.046097   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:41.072198   45025 cri.go:89] found id: ""
	I1211 00:19:41.072212   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.072220   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:41.072225   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:41.072297   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:41.097305   45025 cri.go:89] found id: ""
	I1211 00:19:41.097319   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.097326   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:41.097352   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:41.097363   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:41.163075   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:41.163095   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:41.174199   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:41.174214   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:41.239512   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:41.239535   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:41.239556   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:41.311901   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:41.311918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:43.842688   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:43.853001   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:43.853061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:43.877321   45025 cri.go:89] found id: ""
	I1211 00:19:43.877335   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.877342   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:43.877347   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:43.877403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:43.905861   45025 cri.go:89] found id: ""
	I1211 00:19:43.905874   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.905882   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:43.905887   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:43.905948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:43.931275   45025 cri.go:89] found id: ""
	I1211 00:19:43.931289   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.931309   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:43.931315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:43.931383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:43.957472   45025 cri.go:89] found id: ""
	I1211 00:19:43.957485   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.957492   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:43.957497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:43.957556   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:43.987995   45025 cri.go:89] found id: ""
	I1211 00:19:43.988009   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.988016   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:43.988022   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:43.988082   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:44.015918   45025 cri.go:89] found id: ""
	I1211 00:19:44.015934   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.015942   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:44.015948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:44.016028   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:44.044784   45025 cri.go:89] found id: ""
	I1211 00:19:44.044797   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.044804   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:44.044812   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:44.044825   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:44.111423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:44.111440   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:44.122746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:44.122766   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:44.196525   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:44.196536   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:44.196547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:44.264322   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:44.264340   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:46.797073   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:46.807248   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:46.807312   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:46.833629   45025 cri.go:89] found id: ""
	I1211 00:19:46.833643   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.833650   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:46.833656   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:46.833722   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:46.860316   45025 cri.go:89] found id: ""
	I1211 00:19:46.860329   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.860337   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:46.860342   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:46.860403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:46.886240   45025 cri.go:89] found id: ""
	I1211 00:19:46.886253   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.886261   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:46.886265   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:46.886324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:46.911538   45025 cri.go:89] found id: ""
	I1211 00:19:46.911552   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.911559   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:46.911565   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:46.911625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:46.938014   45025 cri.go:89] found id: ""
	I1211 00:19:46.938029   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.938036   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:46.938041   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:46.938105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:46.965253   45025 cri.go:89] found id: ""
	I1211 00:19:46.965267   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.965274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:46.965279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:46.965339   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:46.991686   45025 cri.go:89] found id: ""
	I1211 00:19:46.991699   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.991706   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:46.991714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:46.991727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:47.057610   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:47.057627   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:47.069235   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:47.069251   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:47.137186   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:47.137197   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:47.137220   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:47.206375   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:47.206397   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:49.735135   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:49.745127   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:49.745191   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:49.770237   45025 cri.go:89] found id: ""
	I1211 00:19:49.770250   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.770257   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:49.770262   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:49.770319   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:49.795789   45025 cri.go:89] found id: ""
	I1211 00:19:49.795803   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.795810   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:49.795815   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:49.795872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:49.825306   45025 cri.go:89] found id: ""
	I1211 00:19:49.825319   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.825326   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:49.825331   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:49.825388   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:49.855190   45025 cri.go:89] found id: ""
	I1211 00:19:49.855204   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.855211   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:49.855216   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:49.855281   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:49.881199   45025 cri.go:89] found id: ""
	I1211 00:19:49.881212   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.881219   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:49.881224   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:49.881280   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:49.906616   45025 cri.go:89] found id: ""
	I1211 00:19:49.906629   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.906636   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:49.906641   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:49.906698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:49.933814   45025 cri.go:89] found id: ""
	I1211 00:19:49.933828   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.933835   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:49.933842   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:49.933859   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:49.944994   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:49.945009   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:50.007164   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:50.007174   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:50.007184   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:50.077454   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:50.077472   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:50.110740   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:50.110757   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.683928   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:52.694104   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:52.694167   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:52.725399   45025 cri.go:89] found id: ""
	I1211 00:19:52.725413   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.725420   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:52.725425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:52.725483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:52.751850   45025 cri.go:89] found id: ""
	I1211 00:19:52.751863   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.751870   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:52.751875   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:52.751937   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:52.780571   45025 cri.go:89] found id: ""
	I1211 00:19:52.780584   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.780591   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:52.780595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:52.780653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:52.809728   45025 cri.go:89] found id: ""
	I1211 00:19:52.809741   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.809748   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:52.809753   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:52.809808   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:52.834891   45025 cri.go:89] found id: ""
	I1211 00:19:52.834904   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.834910   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:52.834915   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:52.835007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:52.861606   45025 cri.go:89] found id: ""
	I1211 00:19:52.861619   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.861626   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:52.861631   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:52.861688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:52.888101   45025 cri.go:89] found id: ""
	I1211 00:19:52.888115   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.888122   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:52.888130   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:52.888140   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.953090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:52.953108   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:52.964419   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:52.964435   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:53.034074   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:53.034091   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:53.034102   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:53.105399   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:53.105417   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.638422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:55.648339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:55.648396   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:55.678848   45025 cri.go:89] found id: ""
	I1211 00:19:55.678868   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.678876   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:55.678884   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:55.678953   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:55.718935   45025 cri.go:89] found id: ""
	I1211 00:19:55.718959   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.718987   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:55.718992   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:55.719061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:55.743738   45025 cri.go:89] found id: ""
	I1211 00:19:55.743751   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.743758   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:55.743763   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:55.743822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:55.769117   45025 cri.go:89] found id: ""
	I1211 00:19:55.769130   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.769137   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:55.769143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:55.769207   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:55.795500   45025 cri.go:89] found id: ""
	I1211 00:19:55.795529   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.795537   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:55.795542   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:55.795611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:55.824959   45025 cri.go:89] found id: ""
	I1211 00:19:55.824972   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.824979   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:55.824984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:55.825042   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:55.850737   45025 cri.go:89] found id: ""
	I1211 00:19:55.850750   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.850768   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:55.850776   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:55.850787   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.878584   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:55.878600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:55.943684   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:55.943701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:55.954898   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:55.954914   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:56.024872   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:56.024883   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:56.024893   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.594636   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:58.605403   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:58.605467   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:58.637164   45025 cri.go:89] found id: ""
	I1211 00:19:58.637178   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.637189   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:58.637194   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:58.637252   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:58.682644   45025 cri.go:89] found id: ""
	I1211 00:19:58.682657   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.682664   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:58.682672   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:58.682728   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:58.714474   45025 cri.go:89] found id: ""
	I1211 00:19:58.714488   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.714495   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:58.714500   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:58.714558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:58.745457   45025 cri.go:89] found id: ""
	I1211 00:19:58.745470   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.745484   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:58.745489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:58.745545   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:58.771678   45025 cri.go:89] found id: ""
	I1211 00:19:58.771691   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.771704   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:58.771710   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:58.771770   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:58.796493   45025 cri.go:89] found id: ""
	I1211 00:19:58.796507   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.796514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:58.796519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:58.796576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:58.821870   45025 cri.go:89] found id: ""
	I1211 00:19:58.821884   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.821892   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:58.821899   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:58.821909   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.894510   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:58.894537   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:58.927576   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:58.927595   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:58.994438   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:58.994455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:59.005360   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:59.005377   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:59.073100   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
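	
	The "describe nodes" step shows how each pass probes the failure: minikube shells into the node and runs its bundled kubectl binary (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node-local kubeconfig, and the repeated memcache.go:265 lines are that kubectl client's discovery layer failing to fetch the server's API group list. The failing command can be replayed verbatim from a shell on the node:
	
	    # taken from the log: the in-node kubectl call behind "describe nodes"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	Exit status 1 with only "connection refused" on stderr reproduces exactly what the gatherer records above.
	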
	I1211 00:20:01.573622   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:01.584703   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:01.584773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:01.612873   45025 cri.go:89] found id: ""
	I1211 00:20:01.612888   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.612895   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:01.612901   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:01.612964   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:01.641246   45025 cri.go:89] found id: ""
	I1211 00:20:01.641259   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.641267   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:01.641272   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:01.641330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:01.670560   45025 cri.go:89] found id: ""
	I1211 00:20:01.670574   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.670582   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:01.670587   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:01.670652   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:01.697783   45025 cri.go:89] found id: ""
	I1211 00:20:01.697797   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.697804   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:01.697809   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:01.697870   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:01.724991   45025 cri.go:89] found id: ""
	I1211 00:20:01.725005   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.725013   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:01.725019   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:01.725078   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:01.751948   45025 cri.go:89] found id: ""
	I1211 00:20:01.751961   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.751969   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:01.751976   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:01.752036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:01.782191   45025 cri.go:89] found id: ""
	I1211 00:20:01.782204   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.782211   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:01.782218   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:01.782228   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:01.849183   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:01.849203   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:01.863105   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:01.863127   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:01.948480   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:01.948490   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:01.948501   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:02.031526   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:02.031546   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.563706   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:04.573944   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:04.573999   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:04.604222   45025 cri.go:89] found id: ""
	I1211 00:20:04.604235   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.604242   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:04.604247   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:04.604308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:04.633340   45025 cri.go:89] found id: ""
	I1211 00:20:04.633353   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.633361   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:04.633365   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:04.633427   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:04.663258   45025 cri.go:89] found id: ""
	I1211 00:20:04.663289   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.663297   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:04.663302   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:04.663373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:04.690031   45025 cri.go:89] found id: ""
	I1211 00:20:04.690044   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.690051   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:04.690056   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:04.690112   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:04.716219   45025 cri.go:89] found id: ""
	I1211 00:20:04.716232   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.716240   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:04.716256   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:04.716317   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:04.742460   45025 cri.go:89] found id: ""
	I1211 00:20:04.742474   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.742481   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:04.742497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:04.742564   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:04.774107   45025 cri.go:89] found id: ""
	I1211 00:20:04.774121   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.774128   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:04.774136   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:04.774146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.806436   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:04.806453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:04.872547   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:04.872566   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:04.884075   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:04.884092   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:04.982628   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:04.982638   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:04.982650   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.551877   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:07.561860   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:07.561924   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:07.586162   45025 cri.go:89] found id: ""
	I1211 00:20:07.586175   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.586192   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:07.586198   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:07.586254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:07.611295   45025 cri.go:89] found id: ""
	I1211 00:20:07.611309   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.611316   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:07.611321   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:07.611377   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:07.637224   45025 cri.go:89] found id: ""
	I1211 00:20:07.637237   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.637245   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:07.637249   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:07.637306   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:07.666366   45025 cri.go:89] found id: ""
	I1211 00:20:07.666379   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.666386   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:07.666391   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:07.666451   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:07.691800   45025 cri.go:89] found id: ""
	I1211 00:20:07.691814   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.691822   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:07.691827   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:07.691885   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:07.717290   45025 cri.go:89] found id: ""
	I1211 00:20:07.717304   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.717321   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:07.717326   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:07.717382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:07.747011   45025 cri.go:89] found id: ""
	I1211 00:20:07.747024   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.747031   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:07.747039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:07.747048   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.816300   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:07.816318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:07.850783   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:07.850798   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:07.920354   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:07.920371   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:07.932012   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:07.932027   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:07.996529   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:10.496978   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:10.507125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:10.507193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:10.532780   45025 cri.go:89] found id: ""
	I1211 00:20:10.532794   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.532801   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:10.532807   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:10.532863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:10.558194   45025 cri.go:89] found id: ""
	I1211 00:20:10.558207   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.558214   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:10.558219   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:10.558277   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:10.583482   45025 cri.go:89] found id: ""
	I1211 00:20:10.583496   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.583503   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:10.583508   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:10.583566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:10.608826   45025 cri.go:89] found id: ""
	I1211 00:20:10.608840   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.608847   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:10.608851   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:10.608910   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:10.637533   45025 cri.go:89] found id: ""
	I1211 00:20:10.637548   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.637554   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:10.637559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:10.637620   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:10.662448   45025 cri.go:89] found id: ""
	I1211 00:20:10.662463   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.662471   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:10.662478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:10.662535   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:10.688164   45025 cri.go:89] found id: ""
	I1211 00:20:10.688187   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.688195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:10.688203   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:10.688213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:10.718946   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:10.718981   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:10.783972   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:10.783992   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:10.795392   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:10.795408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:10.862892   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
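	
	The timestamps (00:19:55, 00:19:58, 00:20:01, 00:20:04, 00:20:07, 00:20:10, ...) show the gatherer retrying on roughly a three-second cadence, and every pass returns zero containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet alike. One way to observe the same loop from outside, a sketch reusing the exact crictl flags from the log (watch is assumed to be available on the node):
	
	    # assumed helper: re-run the apiserver container probe every 3 seconds
	    watch -n 3 'sudo crictl ps -a --name=kube-apiserver'
	
	If the control plane ever came up, a container ID would appear here; in this run it never does.
	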
	I1211 00:20:10.862901   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:10.862911   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.437541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:13.447617   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:13.447679   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:13.473117   45025 cri.go:89] found id: ""
	I1211 00:20:13.473131   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.473139   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:13.473144   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:13.473200   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:13.498616   45025 cri.go:89] found id: ""
	I1211 00:20:13.498629   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.498636   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:13.498641   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:13.498698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:13.525802   45025 cri.go:89] found id: ""
	I1211 00:20:13.525824   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.525832   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:13.525836   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:13.525904   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:13.552063   45025 cri.go:89] found id: ""
	I1211 00:20:13.552077   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.552084   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:13.552092   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:13.552153   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:13.576789   45025 cri.go:89] found id: ""
	I1211 00:20:13.576802   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.576809   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:13.576816   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:13.576872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:13.602028   45025 cri.go:89] found id: ""
	I1211 00:20:13.602042   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.602059   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:13.602065   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:13.602120   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:13.629268   45025 cri.go:89] found id: ""
	I1211 00:20:13.629282   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.629299   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:13.629307   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:13.629318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:13.694395   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:13.694413   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:13.705346   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:13.705362   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:13.771138   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:13.771148   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:13.771158   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.842879   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:13.842896   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:16.379425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:16.389574   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:16.389639   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:16.414634   45025 cri.go:89] found id: ""
	I1211 00:20:16.414647   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.414654   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:16.414659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:16.414721   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:16.441274   45025 cri.go:89] found id: ""
	I1211 00:20:16.441287   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.441293   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:16.441298   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:16.441352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:16.466318   45025 cri.go:89] found id: ""
	I1211 00:20:16.466331   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.466338   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:16.466343   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:16.466399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:16.492814   45025 cri.go:89] found id: ""
	I1211 00:20:16.492827   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.492834   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:16.492839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:16.492894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:16.518104   45025 cri.go:89] found id: ""
	I1211 00:20:16.518117   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.518125   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:16.518130   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:16.518193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:16.543245   45025 cri.go:89] found id: ""
	I1211 00:20:16.543260   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.543267   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:16.543272   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:16.543331   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:16.567767   45025 cri.go:89] found id: ""
	I1211 00:20:16.567781   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.567788   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:16.567795   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:16.567806   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:16.635880   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:16.635897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:16.647253   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:16.647269   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:16.711132   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:16.711143   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:16.711154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:16.781461   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:16.781479   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.312031   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:19.322411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:19.322469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:19.349102   45025 cri.go:89] found id: ""
	I1211 00:20:19.349116   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.349124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:19.349129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:19.349190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:19.373803   45025 cri.go:89] found id: ""
	I1211 00:20:19.373818   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.373825   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:19.373830   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:19.373891   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:19.402187   45025 cri.go:89] found id: ""
	I1211 00:20:19.402201   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.402208   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:19.402213   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:19.402274   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:19.427606   45025 cri.go:89] found id: ""
	I1211 00:20:19.427620   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.427628   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:19.427633   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:19.427693   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:19.452647   45025 cri.go:89] found id: ""
	I1211 00:20:19.452660   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.452667   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:19.452671   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:19.452732   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:19.482184   45025 cri.go:89] found id: ""
	I1211 00:20:19.482198   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.482205   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:19.482211   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:19.482266   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:19.508334   45025 cri.go:89] found id: ""
	I1211 00:20:19.508348   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.508355   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:19.508369   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:19.508379   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:19.582679   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:19.582703   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.613878   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:19.613897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:19.688185   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:19.688206   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:19.699902   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:19.699917   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:19.768799   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
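(Each retry iteration in this log has the same shape: probe for the apiserver process, ask the CRI runtime for each control-plane component's container by name, then collect the kubelet and CRI-O journals, recent dmesg warnings, and a `kubectl describe nodes` that keeps failing. A condensed shell sketch of one such iteration, assuming the same tools on the node; component names and journal/tail sizes are copied from the commands above:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # empty output means no such container, running or exited
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The loop below repeats this probe until the apiserver either comes up or the caller's timeout expires.)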
	I1211 00:20:22.269027   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:22.278950   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:22.279030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:22.303632   45025 cri.go:89] found id: ""
	I1211 00:20:22.303646   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.303653   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:22.303659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:22.303714   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:22.329589   45025 cri.go:89] found id: ""
	I1211 00:20:22.329602   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.329647   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:22.329653   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:22.329707   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:22.359724   45025 cri.go:89] found id: ""
	I1211 00:20:22.359737   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.359744   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:22.359749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:22.359806   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:22.385684   45025 cri.go:89] found id: ""
	I1211 00:20:22.385697   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.385704   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:22.385709   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:22.385768   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:22.411515   45025 cri.go:89] found id: ""
	I1211 00:20:22.411529   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.411536   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:22.411541   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:22.411601   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:22.437841   45025 cri.go:89] found id: ""
	I1211 00:20:22.437858   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.437865   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:22.437870   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:22.437926   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:22.462799   45025 cri.go:89] found id: ""
	I1211 00:20:22.462812   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.462819   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:22.462830   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:22.462840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:22.530683   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:22.530700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:22.541777   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:22.541792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:22.606464   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.606473   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:22.606484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:22.675683   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:22.675704   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:25.205679   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:25.215714   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:25.215772   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:25.240624   45025 cri.go:89] found id: ""
	I1211 00:20:25.240637   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.240644   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:25.240650   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:25.240704   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:25.266729   45025 cri.go:89] found id: ""
	I1211 00:20:25.266743   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.266761   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:25.266766   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:25.266833   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:25.292270   45025 cri.go:89] found id: ""
	I1211 00:20:25.292284   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.292291   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:25.292296   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:25.292352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:25.316988   45025 cri.go:89] found id: ""
	I1211 00:20:25.317013   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.317021   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:25.317027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:25.317094   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:25.342079   45025 cri.go:89] found id: ""
	I1211 00:20:25.342092   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.342100   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:25.342105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:25.342166   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:25.369363   45025 cri.go:89] found id: ""
	I1211 00:20:25.369376   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.369383   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:25.369388   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:25.369445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:25.395141   45025 cri.go:89] found id: ""
	I1211 00:20:25.395155   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.395166   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:25.395173   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:25.395183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:25.459743   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:25.459761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:25.470311   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:25.470325   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:25.537864   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:25.537874   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:25.537884   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:25.605782   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:25.605800   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:28.140709   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:28.152210   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:28.152270   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:28.192161   45025 cri.go:89] found id: ""
	I1211 00:20:28.192175   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.192182   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:28.192188   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:28.192254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:28.226107   45025 cri.go:89] found id: ""
	I1211 00:20:28.226121   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.226128   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:28.226133   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:28.226190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:28.252351   45025 cri.go:89] found id: ""
	I1211 00:20:28.252364   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.252371   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:28.252376   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:28.252437   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:28.277856   45025 cri.go:89] found id: ""
	I1211 00:20:28.277869   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.277876   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:28.277882   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:28.277942   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:28.303425   45025 cri.go:89] found id: ""
	I1211 00:20:28.303442   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.303449   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:28.303454   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:28.303533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:28.327952   45025 cri.go:89] found id: ""
	I1211 00:20:28.327965   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.327973   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:28.327978   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:28.328036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:28.352541   45025 cri.go:89] found id: ""
	I1211 00:20:28.352556   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.352563   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:28.352571   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:28.352581   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:28.417587   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:28.417606   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:28.428990   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:28.429005   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:28.493232   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:28.493242   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:28.493252   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:28.561239   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:28.561257   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.093955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:31.104422   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:31.104484   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:31.130996   45025 cri.go:89] found id: ""
	I1211 00:20:31.131011   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.131018   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:31.131023   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:31.131088   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:31.170443   45025 cri.go:89] found id: ""
	I1211 00:20:31.170457   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.170465   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:31.170470   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:31.170531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:31.204748   45025 cri.go:89] found id: ""
	I1211 00:20:31.204769   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.204777   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:31.204781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:31.204846   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:31.235573   45025 cri.go:89] found id: ""
	I1211 00:20:31.235587   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.235594   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:31.235606   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:31.235664   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:31.260669   45025 cri.go:89] found id: ""
	I1211 00:20:31.260683   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.260690   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:31.260695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:31.260753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:31.286253   45025 cri.go:89] found id: ""
	I1211 00:20:31.286267   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.286274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:31.286279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:31.286338   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:31.313885   45025 cri.go:89] found id: ""
	I1211 00:20:31.313903   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.313910   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:31.313917   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:31.313928   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:31.376250   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:31.376260   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:31.376271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:31.445930   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:31.445948   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.477909   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:31.477923   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:31.547558   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:31.547575   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.060343   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:34.071407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:34.071468   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:34.097367   45025 cri.go:89] found id: ""
	I1211 00:20:34.097381   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.097389   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:34.097394   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:34.097455   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:34.125233   45025 cri.go:89] found id: ""
	I1211 00:20:34.125246   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.125253   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:34.125258   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:34.125313   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:34.152711   45025 cri.go:89] found id: ""
	I1211 00:20:34.152724   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.152731   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:34.152735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:34.152797   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:34.183533   45025 cri.go:89] found id: ""
	I1211 00:20:34.183547   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.183553   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:34.183559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:34.183627   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:34.212367   45025 cri.go:89] found id: ""
	I1211 00:20:34.212379   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.212386   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:34.212392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:34.212450   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:34.239991   45025 cri.go:89] found id: ""
	I1211 00:20:34.240005   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.240012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:34.240017   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:34.240084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:34.265795   45025 cri.go:89] found id: ""
	I1211 00:20:34.265809   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.265816   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:34.265823   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:34.265833   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:34.335452   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:34.335471   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:34.366714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:34.366729   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:34.434761   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:34.434779   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.445767   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:34.445782   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:34.513054   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.014301   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:37.029619   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:37.029688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:37.061510   45025 cri.go:89] found id: ""
	I1211 00:20:37.061525   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.061533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:37.061539   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:37.061597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:37.087429   45025 cri.go:89] found id: ""
	I1211 00:20:37.087442   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.087449   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:37.087454   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:37.087513   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:37.113865   45025 cri.go:89] found id: ""
	I1211 00:20:37.113878   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.113885   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:37.113890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:37.113951   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:37.139634   45025 cri.go:89] found id: ""
	I1211 00:20:37.139647   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.139655   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:37.139659   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:37.139723   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:37.177513   45025 cri.go:89] found id: ""
	I1211 00:20:37.177527   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.177535   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:37.177540   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:37.177599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:37.207209   45025 cri.go:89] found id: ""
	I1211 00:20:37.207223   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.207230   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:37.207235   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:37.207291   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:37.235860   45025 cri.go:89] found id: ""
	I1211 00:20:37.235874   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.235880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:37.235888   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:37.235898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:37.302242   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:37.302260   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:37.313364   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:37.313380   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:37.383109   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.383119   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:37.383134   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:37.452480   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:37.452497   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:39.981534   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:39.992011   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:39.992074   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:40.037108   45025 cri.go:89] found id: ""
	I1211 00:20:40.037123   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.037131   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:40.037137   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:40.037205   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:40.073935   45025 cri.go:89] found id: ""
	I1211 00:20:40.073950   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.073958   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:40.073963   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:40.074024   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:40.103233   45025 cri.go:89] found id: ""
	I1211 00:20:40.103247   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.103255   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:40.103260   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:40.103324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:40.130384   45025 cri.go:89] found id: ""
	I1211 00:20:40.130398   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.130405   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:40.130411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:40.130482   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:40.168123   45025 cri.go:89] found id: ""
	I1211 00:20:40.168137   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.168143   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:40.168149   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:40.168209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:40.206729   45025 cri.go:89] found id: ""
	I1211 00:20:40.206743   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.206750   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:40.206755   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:40.206814   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:40.237917   45025 cri.go:89] found id: ""
	I1211 00:20:40.237930   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.237937   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:40.237945   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:40.237954   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:40.306231   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:40.306249   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:40.335237   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:40.335256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:40.407102   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:40.407124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:40.418948   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:40.418987   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:40.487059   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:42.987371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:42.997627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:42.997687   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:43.034834   45025 cri.go:89] found id: ""
	I1211 00:20:43.034847   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.034854   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:43.034858   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:43.034917   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:43.061014   45025 cri.go:89] found id: ""
	I1211 00:20:43.061028   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.061035   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:43.061040   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:43.061111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:43.086728   45025 cri.go:89] found id: ""
	I1211 00:20:43.086742   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.086749   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:43.086754   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:43.086815   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:43.112537   45025 cri.go:89] found id: ""
	I1211 00:20:43.112551   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.112557   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:43.112563   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:43.112619   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:43.138331   45025 cri.go:89] found id: ""
	I1211 00:20:43.138358   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.138365   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:43.138370   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:43.138440   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:43.177883   45025 cri.go:89] found id: ""
	I1211 00:20:43.177895   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.177902   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:43.177908   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:43.177976   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:43.208963   45025 cri.go:89] found id: ""
	I1211 00:20:43.208976   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.208984   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:43.208991   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:43.209001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:43.276100   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:43.276119   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:43.287251   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:43.287266   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:43.358374   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
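
Every "describe nodes" attempt in this stretch of the log fails the same way: nothing is answering on apiserver port 8441, so kubectl's discovery calls are refused before any API request is made. A minimal way to confirm that by hand from inside the node (a sketch only, assuming minikube ssh access and that ss and curl are present in the node image):

    # Check whether anything is listening on the apiserver port (8441 here).
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"

    # Probe the health endpoint directly; "connection refused" confirms the
    # process is down rather than merely unhealthy.
    curl -sk https://localhost:8441/healthz || echo "apiserver unreachable"

    # Cross-check with the runtime: was an apiserver container ever created?
    sudo crictl ps -a --name kube-apiserver
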
	I1211 00:20:43.358389   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:43.358399   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:43.430845   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:43.430863   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:45.960980   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:45.971128   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:45.971189   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:45.997483   45025 cri.go:89] found id: ""
	I1211 00:20:45.997497   45025 logs.go:282] 0 containers: []
	W1211 00:20:45.997504   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:45.997509   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:45.997566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:46.030243   45025 cri.go:89] found id: ""
	I1211 00:20:46.030257   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.030265   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:46.030280   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:46.030341   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:46.057812   45025 cri.go:89] found id: ""
	I1211 00:20:46.057826   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.057834   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:46.057839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:46.057896   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:46.094313   45025 cri.go:89] found id: ""
	I1211 00:20:46.094326   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.094334   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:46.094339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:46.094403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:46.120781   45025 cri.go:89] found id: ""
	I1211 00:20:46.120796   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.120803   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:46.120808   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:46.120867   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:46.153078   45025 cri.go:89] found id: ""
	I1211 00:20:46.153091   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.153099   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:46.153105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:46.153164   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:46.184025   45025 cri.go:89] found id: ""
	I1211 00:20:46.184038   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.184045   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:46.184052   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:46.184065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:46.195376   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:46.195391   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:46.264561   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:46.264571   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:46.264583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:46.334575   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:46.334592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:46.365686   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:46.365701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:48.932730   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:48.943221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:48.943289   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:48.970754   45025 cri.go:89] found id: ""
	I1211 00:20:48.970769   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.970775   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:48.970781   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:48.970851   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:48.998179   45025 cri.go:89] found id: ""
	I1211 00:20:48.998193   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.998200   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:48.998205   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:48.998265   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:49.027459   45025 cri.go:89] found id: ""
	I1211 00:20:49.027472   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.027485   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:49.027490   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:49.027554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:49.053666   45025 cri.go:89] found id: ""
	I1211 00:20:49.053693   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.053700   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:49.053705   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:49.053773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:49.080140   45025 cri.go:89] found id: ""
	I1211 00:20:49.080155   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.080162   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:49.080167   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:49.080223   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:49.106258   45025 cri.go:89] found id: ""
	I1211 00:20:49.106281   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.106289   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:49.106294   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:49.106362   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:49.131929   45025 cri.go:89] found id: ""
	I1211 00:20:49.131952   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.131960   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:49.131967   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:49.131978   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:49.216291   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:49.216315   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:49.247289   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:49.247308   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:49.319005   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:49.319026   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:49.330154   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:49.330171   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:49.399415   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
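
The "container status" gather step above uses a small shell fallback idiom worth unpacking: command substitution resolves the full crictl path via which (falling back to the bare name if which finds nothing), and if that whole crictl invocation fails, the command falls back to the docker CLI. The same one-liner from the log, in isolation:

    # Prefer crictl (resolved via which); only if that invocation fails,
    # fall back to docker ps. Either branch lists all containers, running or not.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
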
	I1211 00:20:51.899678   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:51.910510   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:51.910571   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:51.941358   45025 cri.go:89] found id: ""
	I1211 00:20:51.941372   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.941379   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:51.941384   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:51.941441   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:51.972273   45025 cri.go:89] found id: ""
	I1211 00:20:51.972287   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.972295   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:51.972300   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:51.972357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:51.998172   45025 cri.go:89] found id: ""
	I1211 00:20:51.998184   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.998191   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:51.998197   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:51.998256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:52.028439   45025 cri.go:89] found id: ""
	I1211 00:20:52.028453   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.028460   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:52.028465   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:52.028526   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:52.060485   45025 cri.go:89] found id: ""
	I1211 00:20:52.060500   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.060508   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:52.060513   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:52.060574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:52.093990   45025 cri.go:89] found id: ""
	I1211 00:20:52.094005   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.094012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:52.094018   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:52.094084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:52.122577   45025 cri.go:89] found id: ""
	I1211 00:20:52.122592   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.122599   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:52.122606   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:52.122624   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:52.191378   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:52.191396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:52.203404   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:52.203421   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:52.272572   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:52.272582   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:52.272592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:52.340655   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:52.340672   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:54.871996   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:54.882238   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:54.882299   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:54.908417   45025 cri.go:89] found id: ""
	I1211 00:20:54.908430   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.908437   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:54.908442   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:54.908512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:54.937462   45025 cri.go:89] found id: ""
	I1211 00:20:54.937475   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.937482   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:54.937487   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:54.937547   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:54.965546   45025 cri.go:89] found id: ""
	I1211 00:20:54.965560   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.965567   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:54.965572   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:54.965629   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:54.991381   45025 cri.go:89] found id: ""
	I1211 00:20:54.991395   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.991403   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:54.991407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:54.991469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:55.023225   45025 cri.go:89] found id: ""
	I1211 00:20:55.023243   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.023251   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:55.023257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:55.023340   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:55.069033   45025 cri.go:89] found id: ""
	I1211 00:20:55.069049   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.069056   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:55.069062   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:55.069130   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:55.104401   45025 cri.go:89] found id: ""
	I1211 00:20:55.104417   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.104424   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:55.104432   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:55.104444   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:55.117919   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:55.117939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:55.207253   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:55.207264   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:55.207275   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:55.285978   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:55.286001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:55.318311   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:55.318327   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:57.883510   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:57.893407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:57.893478   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:57.918657   45025 cri.go:89] found id: ""
	I1211 00:20:57.918670   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.918677   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:57.918684   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:57.918739   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:57.944248   45025 cri.go:89] found id: ""
	I1211 00:20:57.944261   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.944268   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:57.944274   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:57.944337   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:57.969321   45025 cri.go:89] found id: ""
	I1211 00:20:57.969335   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.969342   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:57.969347   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:57.969403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:57.994466   45025 cri.go:89] found id: ""
	I1211 00:20:57.994482   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.994490   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:57.994495   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:57.994554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:58.021937   45025 cri.go:89] found id: ""
	I1211 00:20:58.021954   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.021962   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:58.021967   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:58.022033   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:58.048826   45025 cri.go:89] found id: ""
	I1211 00:20:58.048840   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.048848   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:58.048854   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:58.048912   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:58.077218   45025 cri.go:89] found id: ""
	I1211 00:20:58.077231   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.077239   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:58.077246   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:58.077256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:58.145681   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:58.145698   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:58.191796   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:58.191814   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:58.268737   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:58.268756   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:58.280057   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:58.280074   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:58.347775   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
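
Each cycle in this loop opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires the pattern to match that command line exactly, and -n returns only the newest matching PID. A hand-rolled equivalent of the wait loop the log implies (illustrative only; the 3-second interval matches the observed cadence, but the 20-iteration cap is an assumption for the sketch):

    # Illustrative sketch: poll for a kube-apiserver process the same way the
    # log does, giving up after roughly 60 seconds.
    for i in $(seq 1 20); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process found"; break
      fi
      sleep 3
    done
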
	I1211 00:21:00.848653   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:00.859447   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:00.859507   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:00.885107   45025 cri.go:89] found id: ""
	I1211 00:21:00.885123   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.885130   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:00.885136   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:00.885195   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:00.916160   45025 cri.go:89] found id: ""
	I1211 00:21:00.916174   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.916181   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:00.916186   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:00.916242   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:00.941904   45025 cri.go:89] found id: ""
	I1211 00:21:00.941918   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.941926   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:00.941931   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:00.941996   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:00.969553   45025 cri.go:89] found id: ""
	I1211 00:21:00.969566   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.969573   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:00.969579   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:00.969640   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:00.995856   45025 cri.go:89] found id: ""
	I1211 00:21:00.995869   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.995876   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:00.995881   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:00.995936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:01.023643   45025 cri.go:89] found id: ""
	I1211 00:21:01.023672   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.023679   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:01.023685   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:01.023753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:01.049959   45025 cri.go:89] found id: ""
	I1211 00:21:01.049972   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.049979   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:01.049986   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:01.049996   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:01.117206   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:01.117224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:01.129158   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:01.129174   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:01.221837   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:01.221848   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:01.221858   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:01.292030   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:01.292052   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:03.824471   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:03.834984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:03.835048   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:03.865620   45025 cri.go:89] found id: ""
	I1211 00:21:03.865633   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.865640   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:03.865646   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:03.865706   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:03.894960   45025 cri.go:89] found id: ""
	I1211 00:21:03.895000   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.895012   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:03.895018   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:03.895093   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:03.922002   45025 cri.go:89] found id: ""
	I1211 00:21:03.922016   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.922033   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:03.922039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:03.922114   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:03.949011   45025 cri.go:89] found id: ""
	I1211 00:21:03.949025   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.949032   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:03.949037   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:03.949104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:03.979941   45025 cri.go:89] found id: ""
	I1211 00:21:03.979955   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.979983   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:03.979988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:03.980056   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:04.005356   45025 cri.go:89] found id: ""
	I1211 00:21:04.005379   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.005386   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:04.005392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:04.005498   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:04.036172   45025 cri.go:89] found id: ""
	I1211 00:21:04.036193   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.036201   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:04.036210   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:04.036224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:04.075735   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:04.075754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:04.141955   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:04.141976   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:04.154375   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:04.154390   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:04.236732   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:04.236744   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:04.236754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:06.812855   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:06.823280   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:06.823348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:06.849675   45025 cri.go:89] found id: ""
	I1211 00:21:06.849689   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.849696   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:06.849701   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:06.849760   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:06.876012   45025 cri.go:89] found id: ""
	I1211 00:21:06.876026   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.876033   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:06.876038   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:06.876095   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:06.901644   45025 cri.go:89] found id: ""
	I1211 00:21:06.901658   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.901664   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:06.901669   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:06.901726   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:06.926863   45025 cri.go:89] found id: ""
	I1211 00:21:06.926877   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.926885   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:06.926890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:06.926946   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:06.956891   45025 cri.go:89] found id: ""
	I1211 00:21:06.956905   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.956912   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:06.956917   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:06.956978   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:06.981741   45025 cri.go:89] found id: ""
	I1211 00:21:06.981754   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.981762   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:06.981767   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:06.981826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:07.007640   45025 cri.go:89] found id: ""
	I1211 00:21:07.007653   45025 logs.go:282] 0 containers: []
	W1211 00:21:07.007660   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:07.007666   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:07.007678   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:07.076566   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:07.076583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:07.087895   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:07.087910   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:07.159453   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:07.146952   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.147699   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.149250   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.150203   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.152456   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:07.146952   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.147699   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.149250   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.150203   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.152456   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
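
The repeated memcache.go "couldn't get current server API group list" errors are kubectl's discovery client failing on its very first round trip; once the TCP connection itself is refused, every kubectl call in the cycle fails identically. To distinguish "down" from "up but not ready", one can hit the raw readiness endpoint with the same binary and kubeconfig the log uses (a sketch; the paths below are simply those already shown above):

    # Succeeds only once the apiserver is actually serving; otherwise prints
    # the same refused-connection error seen throughout this log.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
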
	I1211 00:21:07.159463   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:07.159474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:07.242834   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:07.242853   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
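	[editor's note] The cycle above repeats for the remainder of this log: minikube polls for a running kube-apiserver process, and while none is found it lists CRI containers for each control-plane component, then re-gathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that poll loop follows; it illustrates the behaviour visible in the log and is not minikube's actual wait code, and it assumes passwordless sudo plus crictl on the host.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
	// probe in the log: pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for !apiserverRunning() {
			for _, name := range components {
				// Same listing the log performs: all containers, IDs only,
				// filtered by component name.
				out, _ := exec.Command("sudo", "crictl", "ps", "-a",
					"--quiet", "--name="+name).Output()
				fmt.Printf("%s containers: %q\n", name, out)
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between polls
		}
	}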
	I1211 00:21:09.772607   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:09.782749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:09.782809   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:09.809021   45025 cri.go:89] found id: ""
	I1211 00:21:09.809035   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.809042   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:09.809048   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:09.809106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:09.837599   45025 cri.go:89] found id: ""
	I1211 00:21:09.837612   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.837619   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:09.837624   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:09.837681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:09.865754   45025 cri.go:89] found id: ""
	I1211 00:21:09.865767   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.865775   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:09.865780   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:09.865841   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:09.890922   45025 cri.go:89] found id: ""
	I1211 00:21:09.890936   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.890943   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:09.890948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:09.891034   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:09.916087   45025 cri.go:89] found id: ""
	I1211 00:21:09.916100   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.916108   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:09.916113   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:09.916169   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:09.941494   45025 cri.go:89] found id: ""
	I1211 00:21:09.941507   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.941514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:09.941520   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:09.941574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:09.967438   45025 cri.go:89] found id: ""
	I1211 00:21:09.967452   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.967460   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:09.967467   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:09.967478   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:10.042566   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:10.032083   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.032850   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.035627   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.036166   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.037948   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:10.032083   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.032850   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.035627   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.036166   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.037948   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:10.042577   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:10.042589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:10.114716   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:10.114734   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:10.147711   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:10.147727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:10.216212   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:10.216230   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
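	[editor's note] Every describe-nodes attempt in this log fails the same way: nothing is listening on localhost:8441, so kubectl's API-group discovery gets "connection refused" before any request is issued. The failure reduces to a single TCP dial, as in this hedged sketch (host and port are taken verbatim from the errors above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The kubectl errors above come down to this dial failing:
		// nothing is bound to localhost:8441 yet.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // connect: connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}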
	I1211 00:21:12.728208   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:12.738793   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:12.738852   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:12.765512   45025 cri.go:89] found id: ""
	I1211 00:21:12.765527   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.765534   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:12.765540   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:12.765599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:12.792241   45025 cri.go:89] found id: ""
	I1211 00:21:12.792254   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.792261   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:12.792266   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:12.792326   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:12.821945   45025 cri.go:89] found id: ""
	I1211 00:21:12.821959   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.821966   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:12.821971   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:12.822029   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:12.847567   45025 cri.go:89] found id: ""
	I1211 00:21:12.847581   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.847588   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:12.847593   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:12.847649   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:12.873684   45025 cri.go:89] found id: ""
	I1211 00:21:12.873699   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.873706   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:12.873711   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:12.873769   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:12.899211   45025 cri.go:89] found id: ""
	I1211 00:21:12.899225   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.899233   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:12.899241   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:12.899301   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:12.925366   45025 cri.go:89] found id: ""
	I1211 00:21:12.925380   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.925387   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:12.925395   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:12.925408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:12.992650   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:12.992667   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:13.004006   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:13.004021   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:13.070046   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:13.060977   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.061855   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.063662   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.064880   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.065622   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:13.060977   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.061855   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.063662   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.064880   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.065622   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:13.070055   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:13.070065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:13.137969   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:13.137986   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
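	[editor's note] Each gather pass shells out to the same fixed set of commands; only their order varies between passes. The mapping below is reconstructed from the "Gathering logs for ..." / Run: pairs in this log (descriptions as minikube prints them, commands verbatim), shown as a small Go program rather than minikube's own source:

	package main

	import "fmt"

	func main() {
		// Reconstructed from the Run: lines above; every command runs under sudo.
		logSources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe nodes":   "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"CRI-O":            "journalctl -u crio -n 400",
			"container status": "`which crictl || echo crictl` ps -a || docker ps -a",
		}
		for name, cmd := range logSources {
			fmt.Printf("%-16s sudo %s\n", name, cmd)
		}
	}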
	I1211 00:21:15.678794   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:15.688954   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:15.689022   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:15.714099   45025 cri.go:89] found id: ""
	I1211 00:21:15.714113   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.714120   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:15.714125   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:15.714190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:15.738722   45025 cri.go:89] found id: ""
	I1211 00:21:15.738735   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.738742   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:15.738747   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:15.738801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:15.764238   45025 cri.go:89] found id: ""
	I1211 00:21:15.764251   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.764258   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:15.764269   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:15.764330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:15.789987   45025 cri.go:89] found id: ""
	I1211 00:21:15.790000   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.790007   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:15.790012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:15.790066   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:15.815536   45025 cri.go:89] found id: ""
	I1211 00:21:15.815549   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.815556   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:15.815567   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:15.815626   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:15.840404   45025 cri.go:89] found id: ""
	I1211 00:21:15.840424   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.840433   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:15.840438   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:15.840497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:15.865028   45025 cri.go:89] found id: ""
	I1211 00:21:15.865041   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.865048   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:15.865054   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:15.865064   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:15.930832   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:15.930850   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:15.942270   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:15.942285   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:16.008579   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:16.000061   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.000957   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.002703   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.003145   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.004633   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:16.000061   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.000957   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.002703   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.003145   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.004633   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:16.008589   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:16.008600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:16.086023   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:16.086047   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
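	[editor's note] The paired found id: "" / 0 containers: [] lines are two halves of one parse: crictl's --quiet output is a newline-separated list of container IDs, and empty output yields an empty slice. An illustrative sketch of that parse (not the actual cri.go implementation):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseIDs turns `crictl ps -a --quiet` output into container IDs.
	// Empty output (the "" the log records) yields an empty slice,
	// which the log then reports as "0 containers: []".
	func parseIDs(out string) []string {
		return strings.Fields(strings.TrimSpace(out))
	}

	func main() {
		fmt.Println(len(parseIDs("")), parseIDs(""))  // 0 []
		fmt.Println(len(parseIDs("abc123\ndef456"))) // 2
	}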
	I1211 00:21:18.616564   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:18.627177   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:18.627235   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:18.655749   45025 cri.go:89] found id: ""
	I1211 00:21:18.655763   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.655771   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:18.655776   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:18.655838   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:18.685932   45025 cri.go:89] found id: ""
	I1211 00:21:18.685946   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.685953   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:18.685958   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:18.686019   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:18.713761   45025 cri.go:89] found id: ""
	I1211 00:21:18.713775   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.713783   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:18.713788   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:18.713847   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:18.740458   45025 cri.go:89] found id: ""
	I1211 00:21:18.740472   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.740480   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:18.740485   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:18.740540   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:18.766011   45025 cri.go:89] found id: ""
	I1211 00:21:18.766025   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.766032   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:18.766036   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:18.766092   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:18.791387   45025 cri.go:89] found id: ""
	I1211 00:21:18.791401   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.791409   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:18.791414   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:18.791471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:18.817326   45025 cri.go:89] found id: ""
	I1211 00:21:18.817340   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.817347   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:18.817354   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:18.817366   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:18.885570   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:18.876789   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.877521   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879346   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879866   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.881477   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:18.876789   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.877521   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879346   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879866   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.881477   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:18.885581   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:18.885592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:18.953656   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:18.953674   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:18.981613   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:18.981629   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:19.048252   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:19.048271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.561008   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:21.571125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:21.571184   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:21.597491   45025 cri.go:89] found id: ""
	I1211 00:21:21.597505   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.597512   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:21.597520   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:21.597576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:21.623022   45025 cri.go:89] found id: ""
	I1211 00:21:21.623040   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.623047   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:21.623052   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:21.623109   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:21.648127   45025 cri.go:89] found id: ""
	I1211 00:21:21.648141   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.648148   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:21.648154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:21.648212   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:21.673563   45025 cri.go:89] found id: ""
	I1211 00:21:21.673577   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.673584   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:21.673589   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:21.673646   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:21.701744   45025 cri.go:89] found id: ""
	I1211 00:21:21.701757   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.701764   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:21.701769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:21.701830   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:21.727163   45025 cri.go:89] found id: ""
	I1211 00:21:21.727177   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.727184   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:21.727189   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:21.727247   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:21.753680   45025 cri.go:89] found id: ""
	I1211 00:21:21.753694   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.753702   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:21.753709   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:21.753720   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.764845   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:21.764862   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:21.825854   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:21.825865   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:21.825877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:21.895180   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:21.895198   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:21.923512   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:21.923530   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.495120   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:24.505627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:24.505701   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:24.532103   45025 cri.go:89] found id: ""
	I1211 00:21:24.532117   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.532124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:24.532129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:24.532183   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:24.561426   45025 cri.go:89] found id: ""
	I1211 00:21:24.561439   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.561447   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:24.561451   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:24.561509   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:24.591493   45025 cri.go:89] found id: ""
	I1211 00:21:24.591506   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.591514   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:24.591519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:24.591582   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:24.618513   45025 cri.go:89] found id: ""
	I1211 00:21:24.618527   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.618534   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:24.618539   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:24.618596   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:24.644876   45025 cri.go:89] found id: ""
	I1211 00:21:24.644890   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.644899   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:24.644904   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:24.644963   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:24.674148   45025 cri.go:89] found id: ""
	I1211 00:21:24.674161   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.674168   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:24.674174   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:24.674236   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:24.700184   45025 cri.go:89] found id: ""
	I1211 00:21:24.700198   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.700205   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:24.700212   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:24.700222   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.765329   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:24.765346   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:24.776593   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:24.776608   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:24.844320   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:24.844329   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:24.844342   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:24.912094   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:24.912111   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.443355   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:27.454562   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:27.454628   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:27.480512   45025 cri.go:89] found id: ""
	I1211 00:21:27.480526   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.480533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:27.480538   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:27.480604   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:27.507028   45025 cri.go:89] found id: ""
	I1211 00:21:27.507041   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.507049   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:27.507054   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:27.507111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:27.533346   45025 cri.go:89] found id: ""
	I1211 00:21:27.533360   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.533367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:27.533372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:27.533435   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:27.563021   45025 cri.go:89] found id: ""
	I1211 00:21:27.563034   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.563042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:27.563047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:27.563105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:27.587813   45025 cri.go:89] found id: ""
	I1211 00:21:27.587831   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.587838   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:27.587843   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:27.587900   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:27.616925   45025 cri.go:89] found id: ""
	I1211 00:21:27.616938   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.616945   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:27.616951   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:27.617007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:27.642256   45025 cri.go:89] found id: ""
	I1211 00:21:27.642269   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.642276   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:27.642283   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:27.642294   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:27.653306   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:27.653326   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:27.716428   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:27.716438   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:27.716455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:27.783513   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:27.783533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.814010   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:27.814025   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:30.382748   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:30.393371   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:30.393432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:30.427609   45025 cri.go:89] found id: ""
	I1211 00:21:30.427623   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.427629   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:30.427635   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:30.427696   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:30.457893   45025 cri.go:89] found id: ""
	I1211 00:21:30.457907   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.457913   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:30.457918   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:30.457980   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:30.492222   45025 cri.go:89] found id: ""
	I1211 00:21:30.492234   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.492241   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:30.492246   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:30.492303   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:30.521511   45025 cri.go:89] found id: ""
	I1211 00:21:30.521525   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.521532   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:30.521537   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:30.521597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:30.547821   45025 cri.go:89] found id: ""
	I1211 00:21:30.547835   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.547842   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:30.547847   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:30.547906   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:30.572652   45025 cri.go:89] found id: ""
	I1211 00:21:30.572666   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.572675   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:30.572681   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:30.572737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:30.601878   45025 cri.go:89] found id: ""
	I1211 00:21:30.601906   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.601914   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:30.601921   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:30.601932   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:30.613084   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:30.613100   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:30.683127   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:30.683136   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:30.683146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:30.750689   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:30.750707   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:30.784168   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:30.784183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.353720   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:33.363733   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:33.363790   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:33.391890   45025 cri.go:89] found id: ""
	I1211 00:21:33.391904   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.391911   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:33.391917   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:33.391984   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:33.423803   45025 cri.go:89] found id: ""
	I1211 00:21:33.423816   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.423823   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:33.423828   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:33.423889   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:33.458122   45025 cri.go:89] found id: ""
	I1211 00:21:33.458135   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.458142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:33.458147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:33.458206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:33.485705   45025 cri.go:89] found id: ""
	I1211 00:21:33.485718   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.485725   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:33.485730   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:33.485786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:33.513596   45025 cri.go:89] found id: ""
	I1211 00:21:33.513609   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.513617   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:33.513622   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:33.513681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:33.539390   45025 cri.go:89] found id: ""
	I1211 00:21:33.539403   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.539412   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:33.539418   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:33.539474   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:33.564837   45025 cri.go:89] found id: ""
	I1211 00:21:33.564849   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.564856   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:33.564863   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:33.564873   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.629883   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:33.629902   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:33.641102   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:33.641118   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:33.708725   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:33.708736   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:33.708746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:33.777920   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:33.777939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.306840   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:36.318198   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:36.318256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:36.347923   45025 cri.go:89] found id: ""
	I1211 00:21:36.347936   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.347943   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:36.347948   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:36.348003   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:36.372908   45025 cri.go:89] found id: ""
	I1211 00:21:36.372921   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.372928   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:36.372934   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:36.372994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:36.398449   45025 cri.go:89] found id: ""
	I1211 00:21:36.398462   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.398470   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:36.398478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:36.398533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:36.438503   45025 cri.go:89] found id: ""
	I1211 00:21:36.438516   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.438523   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:36.438528   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:36.438585   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:36.468232   45025 cri.go:89] found id: ""
	I1211 00:21:36.468245   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.468253   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:36.468257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:36.468318   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:36.494076   45025 cri.go:89] found id: ""
	I1211 00:21:36.494089   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.494096   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:36.494101   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:36.494168   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:36.521654   45025 cri.go:89] found id: ""
	I1211 00:21:36.521668   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.521676   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:36.521689   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:36.521700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:36.590822   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:36.590840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.620876   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:36.620891   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:36.689379   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:36.689396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:36.700340   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:36.700355   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:36.768766   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:39.270429   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:39.280501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:39.280558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:39.308182   45025 cri.go:89] found id: ""
	I1211 00:21:39.308203   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.308212   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:39.308218   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:39.308278   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:39.334096   45025 cri.go:89] found id: ""
	I1211 00:21:39.334113   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.334123   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:39.334132   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:39.334203   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:39.360088   45025 cri.go:89] found id: ""
	I1211 00:21:39.360101   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.360108   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:39.360115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:39.360174   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:39.386315   45025 cri.go:89] found id: ""
	I1211 00:21:39.386328   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.386336   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:39.386341   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:39.386399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:39.418994   45025 cri.go:89] found id: ""
	I1211 00:21:39.419008   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.419015   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:39.419020   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:39.419081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:39.446027   45025 cri.go:89] found id: ""
	I1211 00:21:39.446040   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.446047   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:39.446052   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:39.446119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:39.474854   45025 cri.go:89] found id: ""
	I1211 00:21:39.474867   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.474880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:39.474888   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:39.474898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:39.548615   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:39.548635   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:39.577039   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:39.577058   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:39.643644   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:39.643662   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:39.654782   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:39.654797   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:39.721483   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:42.221753   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:42.234138   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:42.234209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:42.265615   45025 cri.go:89] found id: ""
	I1211 00:21:42.265631   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.265639   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:42.265645   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:42.265716   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:42.295341   45025 cri.go:89] found id: ""
	I1211 00:21:42.295357   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.295365   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:42.295371   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:42.295432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:42.324010   45025 cri.go:89] found id: ""
	I1211 00:21:42.324025   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.324032   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:42.324039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:42.324101   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:42.355998   45025 cri.go:89] found id: ""
	I1211 00:21:42.356012   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.356020   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:42.356025   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:42.356087   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:42.385254   45025 cri.go:89] found id: ""
	I1211 00:21:42.385267   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.385275   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:42.385279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:42.385379   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:42.418942   45025 cri.go:89] found id: ""
	I1211 00:21:42.418956   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.418986   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:42.418993   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:42.419049   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:42.446484   45025 cri.go:89] found id: ""
	I1211 00:21:42.446497   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.446504   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:42.446511   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:42.446522   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:42.521774   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:42.521792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:42.533107   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:42.533124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:42.601857   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:42.601867   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:42.601877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:42.670754   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:42.670773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.205036   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:45.223242   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:45.223325   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:45.290545   45025 cri.go:89] found id: ""
	I1211 00:21:45.290560   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.290567   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:45.290580   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:45.290653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:45.321549   45025 cri.go:89] found id: ""
	I1211 00:21:45.321562   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.321581   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:45.321587   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:45.321660   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:45.351332   45025 cri.go:89] found id: ""
	I1211 00:21:45.351345   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.351353   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:45.351358   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:45.351418   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:45.377195   45025 cri.go:89] found id: ""
	I1211 00:21:45.377208   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.377215   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:45.377221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:45.377284   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:45.414830   45025 cri.go:89] found id: ""
	I1211 00:21:45.414844   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.414852   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:45.414857   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:45.414922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:45.444982   45025 cri.go:89] found id: ""
	I1211 00:21:45.444996   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.445003   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:45.445008   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:45.445065   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:45.475344   45025 cri.go:89] found id: ""
	I1211 00:21:45.475358   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.475365   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:45.475372   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:45.475388   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:45.544982   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:45.545000   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.578028   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:45.578044   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:45.650334   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:45.650360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:45.661530   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:45.661547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:45.726146   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:48.226425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:48.236595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:48.236655   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:48.264517   45025 cri.go:89] found id: ""
	I1211 00:21:48.264531   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.264538   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:48.264544   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:48.264602   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:48.291335   45025 cri.go:89] found id: ""
	I1211 00:21:48.291349   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.291356   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:48.291361   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:48.291420   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:48.317975   45025 cri.go:89] found id: ""
	I1211 00:21:48.317996   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.318005   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:48.318010   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:48.318090   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:48.343743   45025 cri.go:89] found id: ""
	I1211 00:21:48.343757   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.343764   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:48.343769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:48.343839   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:48.370548   45025 cri.go:89] found id: ""
	I1211 00:21:48.370561   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.370568   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:48.370573   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:48.370633   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:48.398956   45025 cri.go:89] found id: ""
	I1211 00:21:48.398991   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.398999   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:48.399004   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:48.399081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:48.432879   45025 cri.go:89] found id: ""
	I1211 00:21:48.432892   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.432900   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:48.432908   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:48.432918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:48.514612   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:48.514631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:48.526574   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:48.526589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:48.594430   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:48.594439   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:48.594449   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:48.662467   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:48.662487   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:51.193260   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:51.203850   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:51.203909   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:51.229218   45025 cri.go:89] found id: ""
	I1211 00:21:51.229232   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.229240   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:51.229249   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:51.229307   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:51.255535   45025 cri.go:89] found id: ""
	I1211 00:21:51.255549   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.255556   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:51.255561   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:51.255617   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:51.281281   45025 cri.go:89] found id: ""
	I1211 00:21:51.281295   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.281302   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:51.281306   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:51.281366   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:51.305242   45025 cri.go:89] found id: ""
	I1211 00:21:51.305256   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.305263   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:51.305268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:51.305324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:51.330682   45025 cri.go:89] found id: ""
	I1211 00:21:51.330695   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.330712   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:51.330717   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:51.330786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:51.361324   45025 cri.go:89] found id: ""
	I1211 00:21:51.361338   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.361345   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:51.361351   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:51.361410   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:51.387177   45025 cri.go:89] found id: ""
	I1211 00:21:51.387191   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.387198   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:51.387205   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:51.387216   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:51.461910   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:51.461927   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:51.473746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:51.473761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:51.542962   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:51.542994   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:51.543008   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:51.611981   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:51.612003   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:54.140885   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:54.151154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:54.151216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:54.177398   45025 cri.go:89] found id: ""
	I1211 00:21:54.177412   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.177419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:54.177424   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:54.177483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:54.202665   45025 cri.go:89] found id: ""
	I1211 00:21:54.202679   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.202686   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:54.202691   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:54.202751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:54.228121   45025 cri.go:89] found id: ""
	I1211 00:21:54.228135   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.228142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:54.228147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:54.228206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:54.254699   45025 cri.go:89] found id: ""
	I1211 00:21:54.254713   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.254726   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:54.254732   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:54.254794   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:54.280912   45025 cri.go:89] found id: ""
	I1211 00:21:54.280926   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.280934   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:54.280939   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:54.281000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:54.309917   45025 cri.go:89] found id: ""
	I1211 00:21:54.309930   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.309937   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:54.309943   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:54.310000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:54.335081   45025 cri.go:89] found id: ""
	I1211 00:21:54.335094   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.335102   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:54.335110   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:54.335120   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:54.402799   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:54.402819   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:54.423966   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:54.423982   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:54.493676   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:54.493685   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:54.493695   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:54.562184   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:54.562202   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.095145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:57.105735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:57.105793   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:57.137586   45025 cri.go:89] found id: ""
	I1211 00:21:57.137600   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.137607   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:57.137612   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:57.137669   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:57.162960   45025 cri.go:89] found id: ""
	I1211 00:21:57.162997   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.163004   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:57.163009   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:57.163068   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:57.189960   45025 cri.go:89] found id: ""
	I1211 00:21:57.189982   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.189989   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:57.189994   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:57.190059   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:57.215046   45025 cri.go:89] found id: ""
	I1211 00:21:57.215059   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.215067   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:57.215072   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:57.215129   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:57.239646   45025 cri.go:89] found id: ""
	I1211 00:21:57.239659   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.239678   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:57.239682   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:57.239737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:57.264818   45025 cri.go:89] found id: ""
	I1211 00:21:57.264832   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.264839   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:57.264844   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:57.264913   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:57.290063   45025 cri.go:89] found id: ""
	I1211 00:21:57.290076   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.290083   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:57.290090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:57.290103   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:57.300820   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:57.300834   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:57.366226   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
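	The describe-nodes failure is a symptom rather than a cause: kubectl cannot reach an apiserver on localhost:8441 because no apiserver container exists. A quick, hedged way to confirm that the port genuinely has no listener:

	    # "connection refused" from both probes confirms nothing is bound to 8441
	    curl -ksS --max-time 5 https://localhost:8441/healthz || true
	    sudo ss -ltnp | grep ':8441' || echo "no listener on 8441"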
	I1211 00:21:57.366236   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:57.366246   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:57.435439   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:57.435458   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.464292   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:57.464311   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.034825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:00.107263   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:22:00.107592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:22:00.209037   45025 cri.go:89] found id: ""
	I1211 00:22:00.209052   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.209060   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:22:00.209065   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:22:00.209139   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:22:00.259397   45025 cri.go:89] found id: ""
	I1211 00:22:00.259413   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.259420   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:22:00.259426   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:22:00.259499   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:22:00.300996   45025 cri.go:89] found id: ""
	I1211 00:22:00.301011   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.301020   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:22:00.301026   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:22:00.301121   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:22:00.355749   45025 cri.go:89] found id: ""
	I1211 00:22:00.355766   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.355775   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:22:00.355782   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:22:00.355863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:22:00.397265   45025 cri.go:89] found id: ""
	I1211 00:22:00.397279   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.397287   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:22:00.397292   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:22:00.397357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:22:00.431985   45025 cri.go:89] found id: ""
	I1211 00:22:00.432000   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.432008   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:22:00.432014   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:22:00.432079   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:22:00.475122   45025 cri.go:89] found id: ""
	I1211 00:22:00.475138   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.475145   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:22:00.475154   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:22:00.475165   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.544019   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:22:00.544039   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:22:00.556109   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:22:00.556126   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:22:00.625124   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:22:00.625135   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:22:00.625146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:22:00.693368   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:22:00.693387   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:22:03.226119   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:03.236558   45025 kubeadm.go:602] duration metric: took 4m3.502420888s to restartPrimaryControlPlane
	W1211 00:22:03.236621   45025 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 00:22:03.236698   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
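	Having given up on restarting the existing control plane, minikube falls back to a clean re-init. The reset step it runs first is worth reading on its own; stripped of the PATH wrapper it is:

	    # tear down the node state written by the previous kubeadm run so a
	    # fresh `kubeadm init` can start over; --force skips the confirmation
	    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force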
	I1211 00:22:03.653513   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:22:03.666451   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:22:03.674394   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:22:03.674497   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:22:03.682496   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:22:03.682506   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:22:03.682556   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:22:03.690253   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:22:03.690312   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:22:03.697814   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:22:03.705532   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:22:03.705584   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:22:03.712909   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.720642   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:22:03.720704   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.728085   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:22:03.735639   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:22:03.735694   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
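	The grep-then-rm sequence above condenses to a small loop: any kubeconfig that does not reference the expected control-plane endpoint is treated as stale and removed (here all four files are already absent, so every grep exits with status 2). A sketch of the same logic:

	    ep="https://control-plane.minikube.internal:8441"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	    done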
	I1211 00:22:03.743458   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:22:03.864690   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:22:03.865125   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:22:03.931571   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:26:05.371070   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:26:05.371093   45025 kubeadm.go:319] 
	I1211 00:26:05.371179   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
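	Note the long --ignore-preflight-errors list on the init invocation above: minikube deliberately suppresses checks (ports, swap, memory, pre-existing manifests) that are expected to trip inside a container driver. To see what those checks would have reported without the suppression, a hedged one-liner against the same config:

	    # dry-run only the preflight phase, nothing else
	    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml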
	I1211 00:26:05.375684   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.375734   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:05.375839   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:05.375903   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:05.375950   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:05.375995   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:05.376042   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:05.376088   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:05.376135   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:05.376181   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:05.376229   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:05.376273   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:05.376319   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:05.376364   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:05.376435   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:05.376530   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:05.376618   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:05.376680   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:05.379737   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:05.379839   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:05.379918   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:05.380012   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:05.380083   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:05.380156   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:05.380207   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:05.380283   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:05.380352   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:05.380433   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:05.380508   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:05.380558   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:05.380610   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:05.380656   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:05.380709   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:05.380759   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:05.380821   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:05.380871   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:05.380957   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:05.381029   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:05.383945   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:05.384057   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:05.384159   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:05.384228   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:05.384331   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:05.384422   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:05.384548   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:05.384657   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:05.384704   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:05.384857   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:05.384973   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:26:05.385047   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182146s
	I1211 00:26:05.385051   45025 kubeadm.go:319] 
	I1211 00:26:05.385122   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:26:05.385153   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:26:05.385275   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:26:05.385279   45025 kubeadm.go:319] 
	I1211 00:26:05.385390   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:26:05.385422   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:26:05.385452   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:26:05.385461   45025 kubeadm.go:319] 
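	The health gate kubeadm polls for those four minutes is the kubelet's own localhost endpoint, not the apiserver. Checking it directly separates "kubelet down" from "control plane down":

	    # a healthy kubelet answers "ok"; connection refused means the kubelet
	    # process itself is not running
	    curl -sS --max-time 5 http://127.0.0.1:10248/healthz; echo
	    systemctl is-active kubelet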
	W1211 00:26:05.385565   45025 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182146s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
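	Given that this node is on cgroups v1 (kernel 5.15, and the deprecation warning above), one plausible culprit is kubelet v1.35's cgroup v1 gate. A loudly hedged sketch, assuming the KubeletConfiguration field is spelled failCgroupV1 as the warning's 'FailCgroupV1' option suggests:

	    # ASSUMPTION: field name and placement are inferred from the warning
	    # text, not verified against this kubelet build
	    echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet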
	
	I1211 00:26:05.385656   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:26:05.805014   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:26:05.817222   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:26:05.817275   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:26:05.825148   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:26:05.825157   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:26:05.825207   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:26:05.832932   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:26:05.832991   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:26:05.840249   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:26:05.848087   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:26:05.848149   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:26:05.855944   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.863906   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:26:05.863960   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.871464   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:26:05.879062   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:26:05.879116   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:26:05.886444   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:26:05.923722   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.924046   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:06.002092   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:06.002152   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:06.002191   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:06.002233   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:06.002283   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:06.002332   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:06.002377   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:06.002429   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:06.002486   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:06.002528   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:06.002578   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:06.002626   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:06.076323   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:06.076462   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:06.076570   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:06.087446   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:06.092847   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:06.092964   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:06.093051   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:06.093134   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:06.093195   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:06.093273   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:06.093327   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:06.093390   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:06.093452   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:06.093529   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:06.093602   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:06.093639   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:06.093696   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:06.504239   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:06.701840   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:07.114481   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:07.226723   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:07.349377   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:07.350330   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:07.353007   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:07.356354   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:07.356511   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:07.356601   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:07.356672   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:07.373379   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:07.373693   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:07.381535   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:07.381916   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:07.382096   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:07.509380   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:07.509514   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:30:07.509220   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00004985s
	I1211 00:30:07.509346   45025 kubeadm.go:319] 
	I1211 00:30:07.509429   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:30:07.509464   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:30:07.509569   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:30:07.509574   45025 kubeadm.go:319] 
	I1211 00:30:07.509677   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:30:07.509708   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:30:07.509737   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:30:07.509740   45025 kubeadm.go:319] 
	I1211 00:30:07.513952   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:30:07.514370   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:30:07.514477   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:30:07.514741   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 00:30:07.514745   45025 kubeadm.go:319] 
	I1211 00:30:07.514828   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
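	At this point the useful evidence lives in the kubelet journal rather than in kubeadm's output; the two commands kubeadm suggests are the right starting point, for example:

	    systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50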
	I1211 00:30:07.514885   45025 kubeadm.go:403] duration metric: took 12m7.817411267s to StartCluster
	I1211 00:30:07.514914   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:30:07.514994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:30:07.541269   45025 cri.go:89] found id: ""
	I1211 00:30:07.541283   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.541291   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:30:07.541299   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:30:07.541373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:30:07.568371   45025 cri.go:89] found id: ""
	I1211 00:30:07.568385   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.568392   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:30:07.568397   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:30:07.568452   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:30:07.593463   45025 cri.go:89] found id: ""
	I1211 00:30:07.593477   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.593484   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:30:07.593489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:30:07.593551   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:30:07.617718   45025 cri.go:89] found id: ""
	I1211 00:30:07.617732   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.617739   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:30:07.617746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:30:07.617801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:30:07.644176   45025 cri.go:89] found id: ""
	I1211 00:30:07.644190   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.644197   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:30:07.644202   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:30:07.644260   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:30:07.673956   45025 cri.go:89] found id: ""
	I1211 00:30:07.673970   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.673977   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:30:07.673982   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:30:07.674040   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:30:07.699591   45025 cri.go:89] found id: ""
	I1211 00:30:07.699605   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.699612   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:30:07.699619   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:30:07.699631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:30:07.710731   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:30:07.710746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:30:07.782904   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:30:07.782915   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:30:07.782925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:30:07.853292   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:30:07.853310   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:30:07.882071   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:30:07.882089   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 00:30:07.951740   45025 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 00:30:07.951780   45025 out.go:285] * 
	W1211 00:30:07.951888   45025 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:30:07.951950   45025 out.go:285] * 
	W1211 00:30:07.954090   45025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
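	For the bug report the box asks for, the bundle can be generated against this run's profile (placeholder name below, since the profile is not shown at this point in the log):

	    out/minikube-linux-arm64 logs --file=logs.txt -p <profile>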
	I1211 00:30:07.959721   45025 out.go:203] 
	W1211 00:30:07.962947   45025 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:30:07.963287   45025 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 00:30:07.963357   45025 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 00:30:07.966374   45025 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:09.149057   21258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:09.149786   21258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:09.151488   21258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:09.152225   21258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:09.153831   21258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:30:09 up 41 min,  0 user,  load average: 0.09, 0.26, 0.40
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:30:06 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 11 00:30:07 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:07 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:07 functional-786978 kubelet[21063]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:07 functional-786978 kubelet[21063]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:07 functional-786978 kubelet[21063]: E1211 00:30:07.198345   21063 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 11 00:30:07 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:07 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:07 functional-786978 kubelet[21149]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:07 functional-786978 kubelet[21149]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:07 functional-786978 kubelet[21149]: E1211 00:30:07.967878   21149 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:07 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:08 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 11 00:30:08 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:08 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:08 functional-786978 kubelet[21170]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:08 functional-786978 kubelet[21170]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:08 functional-786978 kubelet[21170]: E1211 00:30:08.724135   21170 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:08 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:08 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
E1211 00:30:09.648127    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (339.781843ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.78s)
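
The kubelet journal above pins down the root cause: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so the unit crash-loops (restart counter 960 and climbing) and kubeadm's four-minute wait on http://127.0.0.1:10248/healthz can never pass. The kubeadm warning in the same output names the escape hatch, the kubelet configuration option 'FailCgroupV1'. A workaround sketch follows; the minikube flag spelling in option 1 is an assumption this run never exercises, while the cgroup v2 switch in option 2 is the standard Ubuntu mechanism:

	# Option 1 (assumed flag mapping): forward FailCgroupV1=false to the kubelet
	# via minikube's extra-config, per the kubeadm warning above.
	minikube start -p functional-786978 --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.fail-cgroup-v1=false

	# Option 2: boot the Ubuntu 20.04 host with the unified cgroup hierarchy
	# (cgroup v2) so no kubelet override is needed, then reboot.
	sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
	sudo update-grub && sudo reboot

Note that the suggestion minikube itself emits (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch, which is a different failure mode and would not clear the FailCgroupV1 validation error shown here.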

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-786978 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-786978 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (65.669711ms)

** stderr ** 
	E1211 00:30:10.123869   57089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:30:10.125420   57089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:30:10.127513   57089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:30:10.129111   57089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:30:10.130690   57089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-786978 get po -l tier=control-plane -n kube-system -o=json": exit status 1
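
This failure and the ExtraConfig failure above reduce to the same symptom: nothing is listening on apiserver port 8441. A quick triage sketch against the endpoints the errors name; the curl probes are illustrative and not part of the test, and the 127.0.0.1:32786 mapping comes from the docker inspect output below:

	# In-cluster endpoint used by kubectl; refused while the kubelet crash-loops.
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	# The same port as published on the host by the docker driver.
	curl -sk --max-time 5 https://127.0.0.1:32786/healthz || echo "apiserver unreachable"
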
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
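
The port mappings buried in that inspect JSON can be read back directly with a Go template, the same pattern the provisioning log below uses for 22/tcp; a small sketch using this run's container name:

	# Prints the host port Docker publishes for the apiserver (8441/tcp);
	# per the NetworkSettings block above, this resolves to 32786.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-786978
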
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (326.080404ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-976823 image ls --format json --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ ssh     │ functional-976823 ssh pgrep buildkitd                                                                                                             │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ image   │ functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr                                            │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls                                                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format yaml --alsologtostderr                                                                                        │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ image   │ functional-976823 image ls --format table --alsologtostderr                                                                                       │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ delete  │ -p functional-976823                                                                                                                              │ functional-976823 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │ 11 Dec 25 00:03 UTC │
	│ start   │ -p functional-786978 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:03 UTC │                     │
	│ start   │ -p functional-786978 --alsologtostderr -v=8                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:11 UTC │                     │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add registry.k8s.io/pause:latest                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache add minikube-local-cache-test:functional-786978                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ functional-786978 cache delete minikube-local-cache-test:functional-786978                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl images                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ cache   │ functional-786978 cache reload                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ kubectl │ functional-786978 kubectl -- --context functional-786978 get pods                                                                                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ start   │ -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:17:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:17:55.340423   45025 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:17:55.340537   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340541   45025 out.go:374] Setting ErrFile to fd 2...
	I1211 00:17:55.340544   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340791   45025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:17:55.341139   45025 out.go:368] Setting JSON to false
	I1211 00:17:55.342235   45025 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1762,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:17:55.342290   45025 start.go:143] virtualization:  
	I1211 00:17:55.345626   45025 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:17:55.349437   45025 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:17:55.349518   45025 notify.go:221] Checking for updates...
	I1211 00:17:55.355612   45025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:17:55.358489   45025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:17:55.361319   45025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:17:55.364268   45025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:17:55.367246   45025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:17:55.370742   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:55.370850   45025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:17:55.397690   45025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:17:55.397801   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.502686   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.493021097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.502775   45025 docker.go:319] overlay module found
	I1211 00:17:55.506026   45025 out.go:179] * Using the docker driver based on existing profile
	I1211 00:17:55.508857   45025 start.go:309] selected driver: docker
	I1211 00:17:55.508866   45025 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.508963   45025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:17:55.509064   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.563622   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.55460881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.564041   45025 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 00:17:55.564074   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:55.564121   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:55.564168   45025 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.567337   45025 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:17:55.570124   45025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:17:55.572957   45025 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:17:55.575721   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:55.575758   45025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:17:55.575767   45025 cache.go:65] Caching tarball of preloaded images
	I1211 00:17:55.575808   45025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:17:55.575848   45025 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:17:55.575857   45025 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:17:55.575972   45025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:17:55.595069   45025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:17:55.595078   45025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:17:55.595099   45025 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:17:55.595134   45025 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:17:55.595195   45025 start.go:364] duration metric: took 45.113µs to acquireMachinesLock for "functional-786978"
	I1211 00:17:55.595213   45025 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:17:55.595217   45025 fix.go:54] fixHost starting: 
	I1211 00:17:55.595484   45025 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:17:55.612234   45025 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:17:55.612254   45025 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:17:55.615553   45025 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:17:55.615576   45025 machine.go:94] provisionDockerMachine start ...
	I1211 00:17:55.615650   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.633023   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.633331   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.633337   45025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:17:55.782629   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.782643   45025 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:17:55.782717   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.800268   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.800560   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.800569   45025 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:17:55.960068   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.960134   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.979369   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.979668   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.979683   45025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:17:56.131539   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
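	[editor's note] The /etc/hosts script that just ran is idempotent: the first grep only matches a whole line ending in the hostname, so reruns are no-ops; otherwise the 127.0.1.1 entry is rewritten in place or appended. A cleaned-up standalone sketch of the same logic (hostname hard-coded from the log above):
	host=functional-786978
	if ! grep -q "[[:space:]]${host}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    # a 127.0.1.1 entry exists: point it at this hostname
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${host}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${host}" | sudo tee -a /etc/hosts
	  fi
	fi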
	I1211 00:17:56.131559   45025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:17:56.131581   45025 ubuntu.go:190] setting up certificates
	I1211 00:17:56.131589   45025 provision.go:84] configureAuth start
	I1211 00:17:56.131663   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:56.153195   45025 provision.go:143] copyHostCerts
	I1211 00:17:56.153275   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:17:56.153283   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:17:56.153368   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:17:56.153542   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:17:56.153546   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:17:56.153590   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:17:56.153677   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:17:56.153682   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:17:56.153707   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:17:56.153777   45025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:17:56.467494   45025 provision.go:177] copyRemoteCerts
	I1211 00:17:56.467553   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:17:56.467596   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.484090   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:56.587917   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:17:56.605865   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 00:17:56.622832   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:17:56.639884   45025 provision.go:87] duration metric: took 508.274173ms to configureAuth
	I1211 00:17:56.639901   45025 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:17:56.640097   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:56.640201   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.656951   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:56.657259   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:56.657272   45025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:17:57.016039   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
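	[editor's note] The SSH command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; presumably the crio systemd unit in the kic base image sources that file as an EnvironmentFile, which is how the --insecure-registry flag reaches the daemon. A sketch for verifying this on the node (assuming the paths from the log):
	cat /etc/sysconfig/crio.minikube
	# show the unit (plus drop-ins) and confirm it references the sysconfig file
	systemctl cat crio | grep -n -i -e EnvironmentFile -e CRIO_MINIKUBE_OPTIONS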
	I1211 00:17:57.016056   45025 machine.go:97] duration metric: took 1.400473029s to provisionDockerMachine
	I1211 00:17:57.016068   45025 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:17:57.016080   45025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:17:57.016152   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:17:57.016210   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.035864   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.138938   45025 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:17:57.142378   45025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:17:57.142395   45025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:17:57.142405   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:17:57.142462   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:17:57.142546   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:17:57.142617   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:17:57.142658   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:17:57.149965   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:57.167412   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:17:57.184830   45025 start.go:296] duration metric: took 168.748285ms for postStartSetup
	I1211 00:17:57.184913   45025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:17:57.184954   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.203305   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.304245   45025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:17:57.309118   45025 fix.go:56] duration metric: took 1.713893936s for fixHost
	I1211 00:17:57.309133   45025 start.go:83] releasing machines lock for "functional-786978", held for 1.713931903s
	I1211 00:17:57.309206   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:57.326163   45025 ssh_runner.go:195] Run: cat /version.json
	I1211 00:17:57.326207   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.326441   45025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:17:57.326492   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.346150   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.355283   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.447048   45025 ssh_runner.go:195] Run: systemctl --version
	I1211 00:17:57.543733   45025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:17:57.583708   45025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 00:17:57.588962   45025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:17:57.589026   45025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:17:57.598123   45025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:17:57.598147   45025 start.go:496] detecting cgroup driver to use...
	I1211 00:17:57.598178   45025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:17:57.598242   45025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:17:57.616553   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:17:57.632037   45025 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:17:57.632116   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:17:57.648871   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:17:57.662555   45025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:17:57.780641   45025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:17:57.896253   45025 docker.go:234] disabling docker service ...
	I1211 00:17:57.896308   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:17:57.910709   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:17:57.923903   45025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:17:58.032234   45025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:17:58.154255   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:17:58.166925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:17:58.180565   45025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:17:58.180619   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.189311   45025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:17:58.189376   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.198596   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.207202   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.215908   45025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:17:58.223742   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.232864   45025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.241359   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.249993   45025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:17:58.257330   45025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:17:58.264525   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.395006   45025 ssh_runner.go:195] Run: sudo systemctl restart crio
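	[editor's note] Taken together, the sed edits above rewrite the pause image, cgroup manager, conmon cgroup and default sysctls in the CRI-O drop-in before the restart. A sketch for checking the result (expected values inferred from the commands above, not a verbatim copy of the file):
	# expect roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	sudo grep -n -e pause_image -e cgroup_manager -e conmon_cgroup -e ip_unprivileged_port_start /etc/crio/crio.conf.d/02-crio.conf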
	I1211 00:17:58.567132   45025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:17:58.567191   45025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:17:58.572106   45025 start.go:564] Will wait 60s for crictl version
	I1211 00:17:58.572166   45025 ssh_runner.go:195] Run: which crictl
	I1211 00:17:58.576600   45025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:17:58.605345   45025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:17:58.605434   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.635482   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.670505   45025 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:17:58.673486   45025 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:17:58.691254   45025 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:17:58.698413   45025 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1211 00:17:58.701098   45025 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:17:58.701227   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:58.701291   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.741056   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.741070   45025 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:17:58.741127   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.766313   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.766324   45025 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:17:58.766330   45025 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:17:58.766420   45025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 00:17:58.766498   45025 ssh_runner.go:195] Run: crio config
	I1211 00:17:58.831179   45025 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1211 00:17:58.831214   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:58.831224   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:58.831240   45025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:17:58.831262   45025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:17:58.831383   45025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:17:58.831452   45025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:17:58.839023   45025 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:17:58.839084   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:17:58.846528   45025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:17:58.859010   45025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:17:58.871952   45025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1211 00:17:58.884395   45025 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:17:58.888346   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.999004   45025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:17:59.014620   45025 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:17:59.014632   45025 certs.go:195] generating shared ca certs ...
	I1211 00:17:59.014647   45025 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:17:59.014834   45025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:17:59.014887   45025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:17:59.014894   45025 certs.go:257] generating profile certs ...
	I1211 00:17:59.015111   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:17:59.015168   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:17:59.015206   45025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:17:59.015330   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:17:59.015361   45025 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:17:59.015369   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:17:59.015399   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:17:59.015424   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:17:59.015449   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:17:59.015495   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:59.016236   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:17:59.036319   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:17:59.054207   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:17:59.085140   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:17:59.102589   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:17:59.119619   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:17:59.137775   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:17:59.155046   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:17:59.173200   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:17:59.191371   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:17:59.208847   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:17:59.225559   45025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:17:59.238258   45025 ssh_runner.go:195] Run: openssl version
	I1211 00:17:59.244279   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.251482   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:17:59.258806   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262560   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262615   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.303500   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:17:59.310986   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.318422   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:17:59.325839   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329190   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329239   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.369865   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:17:59.377731   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.385365   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:17:59.392850   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396464   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396534   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.437551   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
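	[editor's note] The openssl x509 -hash calls above compute the subject hash that OpenSSL's trust-store lookup expects; each CA is then exposed as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). The same linking done by hand, as a sketch:
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	# trust-store symlink named after the subject hash
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"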
	I1211 00:17:59.445097   45025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:17:59.449099   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:17:59.490493   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:17:59.531562   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:17:59.572726   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:17:59.613479   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:17:59.656606   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
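	[editor's note] Each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid, while a non-zero exit would mark the cert as expiring and in need of regeneration. For example (path taken from the log):
	# succeeds only if the cert is still valid 24h from now
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least 24h" || echo "expiring within 24h"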
	I1211 00:17:59.697483   45025 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:59.697558   45025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:17:59.697631   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.726147   45025 cri.go:89] found id: ""
	I1211 00:17:59.726208   45025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:17:59.734119   45025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:17:59.734129   45025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:17:59.734181   45025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:17:59.741669   45025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.742193   45025 kubeconfig.go:125] found "functional-786978" server: "https://192.168.49.2:8441"
	I1211 00:17:59.743487   45025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:17:59.751799   45025 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-11 00:03:23.654512319 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-11 00:17:58.880060835 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
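	[editor's note] Drift detection here is simply a diff of the kubeadm config last applied against the freshly rendered one; any difference (the enable-admission-plugins override in this test) forces a control-plane reconfigure. In shell terms, roughly:
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config drift detected: reconfiguring cluster from kubeadm.yaml.new"
	fi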
	I1211 00:17:59.751819   45025 kubeadm.go:1161] stopping kube-system containers ...
	I1211 00:17:59.751836   45025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1211 00:17:59.751895   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.779633   45025 cri.go:89] found id: ""
	I1211 00:17:59.779698   45025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 00:17:59.796551   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:17:59.805010   45025 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 11 00:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 11 00:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 11 00:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 11 00:07 /etc/kubernetes/scheduler.conf
	
	I1211 00:17:59.805070   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:17:59.813093   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:17:59.820917   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.820973   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:17:59.828623   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.836494   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.836548   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.843945   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:17:59.851499   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.851553   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:17:59.859289   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:17:59.867193   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:17:59.916974   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.185880   45025 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.268883094s)
	I1211 00:18:02.185949   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.399533   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.467551   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.514148   45025 api_server.go:52] waiting for apiserver process to appear ...
	I1211 00:18:02.514234   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.014347   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.515068   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.014554   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.515116   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.016511   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.515100   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.017684   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.515326   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.014433   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.515145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.014543   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.514950   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.015735   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.514456   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.015825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.514630   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.015335   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.514451   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.014804   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.514494   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.015458   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.514452   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.014884   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.514333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.022420   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.515034   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.017224   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.514464   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.015399   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.514329   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.015271   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.514462   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.017520   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.514376   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.017541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.515013   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.017761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.514358   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.014403   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.514344   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.017371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.515172   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.016422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.514490   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.020263   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.514922   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.014789   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.514345   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.015761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.514955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.018541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.514310   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.014448   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.514337   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.018852   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.515041   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.020888   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.514298   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.022333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.515045   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:33.014735   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:33.514347   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:34.017953   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:34.515070   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:35.015196   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:35.514355   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:36.014375   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:36.514335   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:37.014528   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:37.514323   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:38.014416   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:38.515174   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:39.014438   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:39.514458   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:40.021545   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:40.514947   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:41.016088   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:41.514879   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:42.014943   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:42.514386   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:43.016904   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:43.515352   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:44.015231   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:44.514894   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:45.014476   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:45.514778   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:46.016439   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:46.515114   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:47.014420   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:47.514853   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:48.016610   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:48.514436   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:49.014585   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:49.514442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:50.014533   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:50.514763   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:51.016122   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:51.514418   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:52.015418   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:52.514462   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:53.014702   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:53.515080   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:54.015415   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:54.514399   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:55.016231   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:55.514627   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:56.015154   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:56.515225   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:57.020324   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:57.514495   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:58.015016   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:58.514389   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:59.018412   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:59.515094   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:00.018157   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:00.515152   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:01.014878   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:01.514507   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:02.015181   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
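	[editor's note] The wall of pgrep lines above is minikube polling, at roughly 500ms intervals, for a kube-apiserver process. The loop below reproduces the check; in this run it never succeeds within the wait window, which is the immediate symptom of the failure:
	# poll until an apiserver process matching minikube's pattern appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done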
	I1211 00:19:02.514444   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:02.514543   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:02.540506   45025 cri.go:89] found id: ""
	I1211 00:19:02.540520   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.540528   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:02.540533   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:02.540593   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:02.567414   45025 cri.go:89] found id: ""
	I1211 00:19:02.567427   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.567434   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:02.567439   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:02.567500   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:02.598249   45025 cri.go:89] found id: ""
	I1211 00:19:02.598263   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.598270   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:02.598277   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:02.598348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:02.624793   45025 cri.go:89] found id: ""
	I1211 00:19:02.624807   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.624822   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:02.624828   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:02.624894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:02.654153   45025 cri.go:89] found id: ""
	I1211 00:19:02.654170   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.654177   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:02.654182   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:02.654251   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:02.682217   45025 cri.go:89] found id: ""
	I1211 00:19:02.682231   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.682239   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:02.682244   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:02.682304   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:02.708660   45025 cri.go:89] found id: ""
	I1211 00:19:02.708674   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.708682   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:02.708690   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:02.708700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:02.775902   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:02.775921   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:02.787446   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:02.787463   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:02.857001   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:02.857011   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:02.857022   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:02.927792   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:02.927812   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
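	(Each block above is one pass of minikube's diagnostic sweep once the wait fails: enumerate every expected control-plane container with crictl, then collect the kubelet and CRI-O journals, a filtered dmesg, a describe-nodes attempt with the staged kubectl, and a final container listing. The sweep can be replayed by hand on the node; this sketch simply strings together the exact commands from the log, with paths and the kubectl version taken verbatim from the output above:

	    #!/bin/bash
	    # Replay the diagnostic sweep recorded in the log above.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "no container found matching \"$name\""
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a)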
	I1211 00:19:05.458523   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:05.468377   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:05.468436   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:05.492943   45025 cri.go:89] found id: ""
	I1211 00:19:05.492957   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.492963   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:05.492968   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:05.493030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:05.520504   45025 cri.go:89] found id: ""
	I1211 00:19:05.520517   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.520525   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:05.520530   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:05.520592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:05.551505   45025 cri.go:89] found id: ""
	I1211 00:19:05.551518   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.551525   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:05.551531   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:05.551586   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:05.580658   45025 cri.go:89] found id: ""
	I1211 00:19:05.580672   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.580679   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:05.580683   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:05.580757   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:05.607012   45025 cri.go:89] found id: ""
	I1211 00:19:05.607026   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.607033   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:05.607038   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:05.607102   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:05.632061   45025 cri.go:89] found id: ""
	I1211 00:19:05.632075   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.632082   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:05.632087   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:05.632152   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:05.658481   45025 cri.go:89] found id: ""
	I1211 00:19:05.658494   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.658514   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:05.658522   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:05.658533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:05.724859   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:05.724876   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:05.735886   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:05.735901   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:05.798612   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:05.798622   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:05.798634   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:05.867342   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:05.867360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:08.400995   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:08.413387   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:08.413449   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:08.448131   45025 cri.go:89] found id: ""
	I1211 00:19:08.448144   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.448151   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:08.448157   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:08.448216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:08.477588   45025 cri.go:89] found id: ""
	I1211 00:19:08.477601   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.477608   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:08.477612   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:08.477671   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:08.502742   45025 cri.go:89] found id: ""
	I1211 00:19:08.502755   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.502763   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:08.502768   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:08.502826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:08.528585   45025 cri.go:89] found id: ""
	I1211 00:19:08.528598   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.528606   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:08.528611   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:08.528674   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:08.559543   45025 cri.go:89] found id: ""
	I1211 00:19:08.559557   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.559564   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:08.559569   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:08.559630   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:08.585362   45025 cri.go:89] found id: ""
	I1211 00:19:08.585377   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.585384   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:08.585390   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:08.585462   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:08.611828   45025 cri.go:89] found id: ""
	I1211 00:19:08.611842   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.611849   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:08.611856   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:08.611866   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:08.678470   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:08.678488   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:08.691361   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:08.691376   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:08.762621   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:08.762636   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:08.762649   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:08.832475   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:08.832493   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
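	(Every describe-nodes attempt fails identically: nothing is accepting connections on localhost:8441, the apiserver port for this profile, which is consistent with crictl finding no kube-apiserver container at all. To separate "port closed" from "apiserver up but unhealthy" one could probe the apiserver health endpoint directly; a hypothetical triage check, not part of the test flow, assuming the default unauthenticated access to /readyz:

	    #!/bin/bash
	    # Rough triage of the refused connection seen in the log above.
	    if curl -ksf https://localhost:8441/readyz >/dev/null; then
	      echo "apiserver is up and reports ready"
	    elif curl -ks https://localhost:8441/readyz >/dev/null; then
	      echo "apiserver answers but is not ready"
	    else
	      echo "nothing listening on 8441 - matches the 'connection refused' above"
	    fi)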
	I1211 00:19:11.361776   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:11.371640   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:11.371694   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:11.398476   45025 cri.go:89] found id: ""
	I1211 00:19:11.398489   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.398496   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:11.398501   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:11.398559   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:11.429955   45025 cri.go:89] found id: ""
	I1211 00:19:11.429969   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.429976   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:11.429982   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:11.430037   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:11.457296   45025 cri.go:89] found id: ""
	I1211 00:19:11.457309   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.457316   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:11.457324   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:11.457382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:11.482941   45025 cri.go:89] found id: ""
	I1211 00:19:11.482954   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.482962   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:11.483012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:11.483069   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:11.508408   45025 cri.go:89] found id: ""
	I1211 00:19:11.508431   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.508438   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:11.508443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:11.508510   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:11.533840   45025 cri.go:89] found id: ""
	I1211 00:19:11.533854   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.533869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:11.533875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:11.533950   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:11.559317   45025 cri.go:89] found id: ""
	I1211 00:19:11.559331   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.559338   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:11.559345   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:11.559354   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:11.626027   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:11.626045   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:11.637884   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:11.637900   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:11.704689   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:11.704700   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:11.704711   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:11.774803   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:11.774821   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.306913   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:14.318077   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:14.318146   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:14.343407   45025 cri.go:89] found id: ""
	I1211 00:19:14.343421   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.343428   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:14.343433   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:14.343497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:14.370322   45025 cri.go:89] found id: ""
	I1211 00:19:14.370336   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.370342   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:14.370348   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:14.370406   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:14.397449   45025 cri.go:89] found id: ""
	I1211 00:19:14.397462   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.397469   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:14.397474   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:14.397531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:14.430459   45025 cri.go:89] found id: ""
	I1211 00:19:14.430472   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.430479   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:14.430501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:14.430595   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:14.461756   45025 cri.go:89] found id: ""
	I1211 00:19:14.461769   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.461776   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:14.461781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:14.461849   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:14.488174   45025 cri.go:89] found id: ""
	I1211 00:19:14.488189   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.488196   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:14.488201   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:14.488258   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:14.517330   45025 cri.go:89] found id: ""
	I1211 00:19:14.517343   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.517350   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:14.517357   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:14.517368   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.549197   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:14.549215   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:14.618908   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:14.618926   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:14.630263   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:14.630279   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:14.698427   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:14.698437   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:14.698453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.273043   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:17.283257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:17.283323   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:17.308437   45025 cri.go:89] found id: ""
	I1211 00:19:17.308450   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.308457   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:17.308462   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:17.308522   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:17.337454   45025 cri.go:89] found id: ""
	I1211 00:19:17.337467   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.337474   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:17.337479   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:17.337538   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:17.363695   45025 cri.go:89] found id: ""
	I1211 00:19:17.363709   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.363717   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:17.363722   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:17.363781   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:17.388300   45025 cri.go:89] found id: ""
	I1211 00:19:17.388314   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.388321   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:17.388327   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:17.388383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:17.418934   45025 cri.go:89] found id: ""
	I1211 00:19:17.418947   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.418954   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:17.418959   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:17.419036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:17.453193   45025 cri.go:89] found id: ""
	I1211 00:19:17.453207   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.453214   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:17.453220   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:17.453308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:17.487806   45025 cri.go:89] found id: ""
	I1211 00:19:17.487820   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.487827   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:17.487834   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:17.487845   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:17.553739   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:17.553758   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:17.564920   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:17.564936   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:17.630666   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:17.630680   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:17.630705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.701596   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:17.701614   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:20.234880   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:20.244988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:20.245050   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:20.273088   45025 cri.go:89] found id: ""
	I1211 00:19:20.273101   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.273109   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:20.273114   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:20.273175   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:20.302062   45025 cri.go:89] found id: ""
	I1211 00:19:20.302076   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.302083   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:20.302089   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:20.302157   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:20.326827   45025 cri.go:89] found id: ""
	I1211 00:19:20.326841   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.326859   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:20.326865   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:20.326922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:20.356288   45025 cri.go:89] found id: ""
	I1211 00:19:20.356302   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.356309   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:20.356315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:20.356375   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:20.382358   45025 cri.go:89] found id: ""
	I1211 00:19:20.382373   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.382380   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:20.382386   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:20.382445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:20.417393   45025 cri.go:89] found id: ""
	I1211 00:19:20.417407   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.417424   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:20.417430   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:20.417488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:20.447521   45025 cri.go:89] found id: ""
	I1211 00:19:20.447534   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.447541   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:20.447550   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:20.447560   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:20.518467   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:20.518484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:20.530666   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:20.530681   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:20.599280   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:20.599290   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:20.599301   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:20.666760   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:20.666778   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.200454   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:23.210413   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:23.210471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:23.234734   45025 cri.go:89] found id: ""
	I1211 00:19:23.234748   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.234756   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:23.234761   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:23.234822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:23.260526   45025 cri.go:89] found id: ""
	I1211 00:19:23.260540   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.260547   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:23.260552   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:23.260611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:23.284278   45025 cri.go:89] found id: ""
	I1211 00:19:23.284291   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.284298   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:23.284303   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:23.284360   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:23.309416   45025 cri.go:89] found id: ""
	I1211 00:19:23.309431   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.309438   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:23.309443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:23.309502   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:23.335667   45025 cri.go:89] found id: ""
	I1211 00:19:23.335682   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.335689   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:23.335695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:23.335751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:23.364847   45025 cri.go:89] found id: ""
	I1211 00:19:23.364862   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.364869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:23.364875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:23.364941   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:23.389436   45025 cri.go:89] found id: ""
	I1211 00:19:23.389449   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.389457   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:23.389464   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:23.389477   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:23.402133   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:23.402149   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:23.484989   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:23.484999   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:23.485010   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:23.553567   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:23.553586   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.583342   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:23.583359   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.151360   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:26.161613   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:26.161676   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:26.187432   45025 cri.go:89] found id: ""
	I1211 00:19:26.187446   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.187453   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:26.187459   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:26.187514   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:26.212567   45025 cri.go:89] found id: ""
	I1211 00:19:26.212581   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.212588   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:26.212593   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:26.212650   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:26.238347   45025 cri.go:89] found id: ""
	I1211 00:19:26.238359   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.238367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:26.238372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:26.238426   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:26.264493   45025 cri.go:89] found id: ""
	I1211 00:19:26.264506   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.264513   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:26.264518   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:26.264578   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:26.289421   45025 cri.go:89] found id: ""
	I1211 00:19:26.289435   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.289442   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:26.289446   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:26.289512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:26.317737   45025 cri.go:89] found id: ""
	I1211 00:19:26.317751   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.317758   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:26.317776   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:26.317832   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:26.342012   45025 cri.go:89] found id: ""
	I1211 00:19:26.342025   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.342032   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:26.342039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:26.342049   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:26.409907   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:26.409925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:26.444709   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:26.444725   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.520673   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:26.520692   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:26.533201   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:26.533217   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:26.595360   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
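At this point minikube is in a retry loop: roughly every three seconds it probes for a kube-apiserver process, lists containers for each control-plane component, finds none, and re-gathers diagnostics. The per-component check can be reproduced by hand; a minimal sketch, assuming shell access to the node (for example via "minikube ssh"), using only the crictl invocation already shown in the log:

    # Mirror the checks from the log: list all containers (running or exited)
    # for each control-plane component and report any that are missing.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching $name"
    done

In this run every component comes back empty, so each pass falls through to collecting kubelet, dmesg, CRI-O, and describe-nodes output.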
	I1211 00:19:29.096255   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:29.106290   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:29.106348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:29.135863   45025 cri.go:89] found id: ""
	I1211 00:19:29.135876   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.135883   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:29.135888   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:29.135948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:29.162996   45025 cri.go:89] found id: ""
	I1211 00:19:29.163011   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.163018   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:29.163024   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:29.163104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:29.189722   45025 cri.go:89] found id: ""
	I1211 00:19:29.189738   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.189745   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:29.189749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:29.189834   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:29.215022   45025 cri.go:89] found id: ""
	I1211 00:19:29.215036   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.215042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:29.215047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:29.215106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:29.240657   45025 cri.go:89] found id: ""
	I1211 00:19:29.240671   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.240679   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:29.240684   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:29.240744   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:29.265406   45025 cri.go:89] found id: ""
	I1211 00:19:29.265420   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.265427   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:29.265432   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:29.265488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:29.289115   45025 cri.go:89] found id: ""
	I1211 00:19:29.289128   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.289136   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:29.289143   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:29.289154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:29.316627   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:29.316646   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:29.381873   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:29.381892   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:29.392836   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:29.392852   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:29.474052   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:29.474062   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:29.474072   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.041538   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:32.052288   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:32.052353   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:32.078058   45025 cri.go:89] found id: ""
	I1211 00:19:32.078071   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.078078   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:32.078084   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:32.078143   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:32.104226   45025 cri.go:89] found id: ""
	I1211 00:19:32.104240   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.104251   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:32.104256   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:32.104315   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:32.130104   45025 cri.go:89] found id: ""
	I1211 00:19:32.130123   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.130130   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:32.130135   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:32.130196   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:32.156116   45025 cri.go:89] found id: ""
	I1211 00:19:32.156131   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.156138   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:32.156143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:32.156204   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:32.182027   45025 cri.go:89] found id: ""
	I1211 00:19:32.182039   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.182046   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:32.182051   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:32.182119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:32.206462   45025 cri.go:89] found id: ""
	I1211 00:19:32.206476   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.206483   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:32.206488   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:32.206553   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:32.230714   45025 cri.go:89] found id: ""
	I1211 00:19:32.230727   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.230734   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:32.230757   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:32.230773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:32.295411   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:32.295430   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:32.306690   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:32.306705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:32.373425   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:32.373435   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:32.373446   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.441247   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:32.441264   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
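Each retry opens with the same pgrep probe for a kube-apiserver process, and each one still comes up empty. Two quick node-side checks, hypothetical in the sense that they are not part of the recorded run, that would corroborate the connection-refused errors:

    # The exact process pattern minikube polls between retries:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Not part of the recorded run: confirm nothing is listening on the
    # port the kubeconfig targets.
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"

Both failing together is consistent with the kubelet not having started the apiserver container at all, which matches the empty crictl listings above.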
	I1211 00:19:34.988442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:34.998718   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:34.998785   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:35.036207   45025 cri.go:89] found id: ""
	I1211 00:19:35.036221   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.036231   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:35.036236   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:35.036298   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:35.062611   45025 cri.go:89] found id: ""
	I1211 00:19:35.062624   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.062631   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:35.062636   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:35.062692   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:35.089089   45025 cri.go:89] found id: ""
	I1211 00:19:35.089102   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.089109   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:35.089115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:35.089177   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:35.116537   45025 cri.go:89] found id: ""
	I1211 00:19:35.116550   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.116558   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:35.116564   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:35.116625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:35.141369   45025 cri.go:89] found id: ""
	I1211 00:19:35.141383   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.141390   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:35.141396   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:35.141464   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:35.167717   45025 cri.go:89] found id: ""
	I1211 00:19:35.167731   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.167738   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:35.167746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:35.167805   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:35.193275   45025 cri.go:89] found id: ""
	I1211 00:19:35.193288   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.193295   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:35.193303   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:35.193313   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:35.223396   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:35.223412   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:35.291423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:35.291442   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:35.302744   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:35.302760   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:35.366712   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:35.366722   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:35.366732   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:37.940570   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:37.951183   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:37.951244   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:37.977384   45025 cri.go:89] found id: ""
	I1211 00:19:37.977412   45025 logs.go:282] 0 containers: []
	W1211 00:19:37.977419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:37.977425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:37.977489   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:38.002327   45025 cri.go:89] found id: ""
	I1211 00:19:38.002341   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.002349   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:38.002354   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:38.002433   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:38.032932   45025 cri.go:89] found id: ""
	I1211 00:19:38.032947   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.032955   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:38.032960   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:38.033023   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:38.060494   45025 cri.go:89] found id: ""
	I1211 00:19:38.060508   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.060516   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:38.060522   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:38.060584   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:38.090424   45025 cri.go:89] found id: ""
	I1211 00:19:38.090438   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.090445   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:38.090450   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:38.090511   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:38.117237   45025 cri.go:89] found id: ""
	I1211 00:19:38.117250   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.117258   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:38.117268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:38.117330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:38.144173   45025 cri.go:89] found id: ""
	I1211 00:19:38.144187   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.144195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:38.144203   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:38.144213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:38.213450   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:38.213474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:38.224711   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:38.224727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:38.292623   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:38.292634   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:38.292644   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:38.360121   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:38.360139   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
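The gather steps are identical in every pass, only their order rotates: kubelet journal, dmesg, describe nodes, CRI-O journal, container status. These are the same pieces the user-facing "minikube logs" command collects in one shot; a hypothetical invocation for a single consolidated copy (the profile flag is omitted because the profile name does not appear in this excerpt):

    # Hypothetical invocation: write kubelet/CRI-O journals, dmesg, and
    # cluster state to one file instead of one copy per retry.
    minikube logs --file=minikube-logs.txt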
	I1211 00:19:40.897394   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:40.907308   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:40.907368   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:40.935845   45025 cri.go:89] found id: ""
	I1211 00:19:40.935861   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.935868   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:40.935874   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:40.935936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:40.961885   45025 cri.go:89] found id: ""
	I1211 00:19:40.961899   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.961906   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:40.961911   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:40.961972   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:40.992115   45025 cri.go:89] found id: ""
	I1211 00:19:40.992129   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.992136   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:40.992141   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:40.992199   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:41.017243   45025 cri.go:89] found id: ""
	I1211 00:19:41.017259   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.017269   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:41.017274   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:41.017355   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:41.046002   45025 cri.go:89] found id: ""
	I1211 00:19:41.046016   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.046022   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:41.046027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:41.046097   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:41.072198   45025 cri.go:89] found id: ""
	I1211 00:19:41.072212   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.072220   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:41.072225   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:41.072297   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:41.097305   45025 cri.go:89] found id: ""
	I1211 00:19:41.097319   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.097326   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:41.097352   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:41.097363   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:41.163075   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:41.163095   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:41.174199   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:41.174214   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:41.239512   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:41.239535   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:41.239556   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:41.311901   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:41.311918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:43.842688   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:43.853001   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:43.853061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:43.877321   45025 cri.go:89] found id: ""
	I1211 00:19:43.877335   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.877342   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:43.877347   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:43.877403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:43.905861   45025 cri.go:89] found id: ""
	I1211 00:19:43.905874   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.905882   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:43.905887   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:43.905948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:43.931275   45025 cri.go:89] found id: ""
	I1211 00:19:43.931289   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.931309   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:43.931315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:43.931383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:43.957472   45025 cri.go:89] found id: ""
	I1211 00:19:43.957485   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.957492   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:43.957497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:43.957556   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:43.987995   45025 cri.go:89] found id: ""
	I1211 00:19:43.988009   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.988016   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:43.988022   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:43.988082   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:44.015918   45025 cri.go:89] found id: ""
	I1211 00:19:44.015934   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.015942   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:44.015948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:44.016028   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:44.044784   45025 cri.go:89] found id: ""
	I1211 00:19:44.044797   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.044804   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:44.044812   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:44.044825   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:44.111423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:44.111440   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:44.122746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:44.122766   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:44.196525   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:44.196536   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:44.196547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:44.264322   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:44.264340   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:46.797073   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:46.807248   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:46.807312   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:46.833629   45025 cri.go:89] found id: ""
	I1211 00:19:46.833643   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.833650   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:46.833656   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:46.833722   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:46.860316   45025 cri.go:89] found id: ""
	I1211 00:19:46.860329   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.860337   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:46.860342   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:46.860403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:46.886240   45025 cri.go:89] found id: ""
	I1211 00:19:46.886253   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.886261   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:46.886265   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:46.886324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:46.911538   45025 cri.go:89] found id: ""
	I1211 00:19:46.911552   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.911559   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:46.911565   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:46.911625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:46.938014   45025 cri.go:89] found id: ""
	I1211 00:19:46.938029   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.938036   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:46.938041   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:46.938105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:46.965253   45025 cri.go:89] found id: ""
	I1211 00:19:46.965267   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.965274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:46.965279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:46.965339   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:46.991686   45025 cri.go:89] found id: ""
	I1211 00:19:46.991699   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.991706   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:46.991714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:46.991727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:47.057610   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:47.057627   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:47.069235   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:47.069251   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:47.137186   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:47.137197   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:47.137220   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:47.206375   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:47.206397   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
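A more direct way to wait out this phase, rather than re-reading each failed describe-nodes attempt, would be to poll the apiserver health endpoint until it answers instead of refusing the connection. A sketch under the assumption that curl is available on the node and that /readyz is reachable anonymously (the kubeadm default):

    # Assumption: anonymous access to /readyz is enabled (kubeadm default).
    # Poll until kube-apiserver on 8441 answers instead of refusing connections.
    until curl -ksf https://localhost:8441/readyz >/dev/null 2>&1; do
      echo "apiserver not ready yet"
      sleep 3
    done

Within the window shown here the loop would never exit: every probe shown is still refused.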
	I1211 00:19:49.735135   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:49.745127   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:49.745191   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:49.770237   45025 cri.go:89] found id: ""
	I1211 00:19:49.770250   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.770257   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:49.770262   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:49.770319   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:49.795789   45025 cri.go:89] found id: ""
	I1211 00:19:49.795803   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.795810   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:49.795815   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:49.795872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:49.825306   45025 cri.go:89] found id: ""
	I1211 00:19:49.825319   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.825326   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:49.825331   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:49.825388   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:49.855190   45025 cri.go:89] found id: ""
	I1211 00:19:49.855204   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.855211   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:49.855216   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:49.855281   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:49.881199   45025 cri.go:89] found id: ""
	I1211 00:19:49.881212   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.881219   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:49.881224   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:49.881280   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:49.906616   45025 cri.go:89] found id: ""
	I1211 00:19:49.906629   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.906636   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:49.906641   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:49.906698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:49.933814   45025 cri.go:89] found id: ""
	I1211 00:19:49.933828   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.933835   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:49.933842   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:49.933859   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:49.944994   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:49.945009   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:50.007164   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:50.007174   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:50.007184   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:50.077454   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:50.077472   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:50.110740   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:50.110757   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.683928   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:52.694104   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:52.694167   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:52.725399   45025 cri.go:89] found id: ""
	I1211 00:19:52.725413   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.725420   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:52.725425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:52.725483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:52.751850   45025 cri.go:89] found id: ""
	I1211 00:19:52.751863   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.751870   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:52.751875   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:52.751937   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:52.780571   45025 cri.go:89] found id: ""
	I1211 00:19:52.780584   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.780591   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:52.780595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:52.780653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:52.809728   45025 cri.go:89] found id: ""
	I1211 00:19:52.809741   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.809748   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:52.809753   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:52.809808   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:52.834891   45025 cri.go:89] found id: ""
	I1211 00:19:52.834904   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.834910   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:52.834915   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:52.835007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:52.861606   45025 cri.go:89] found id: ""
	I1211 00:19:52.861619   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.861626   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:52.861631   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:52.861688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:52.888101   45025 cri.go:89] found id: ""
	I1211 00:19:52.888115   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.888122   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:52.888130   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:52.888140   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.953090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:52.953108   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:52.964419   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:52.964435   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:53.034074   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:53.034091   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:53.034102   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:53.105399   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:53.105417   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... identical log-gathering cycles repeated at 00:19:55, 00:19:58, 00:20:01, 00:20:04, 00:20:07, 00:20:10, and 00:20:13 omitted: each pass of pgrep/crictl found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and each "kubectl describe nodes" failed with "connection refused" against localhost:8441 ...]
	I1211 00:20:16.379425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:16.389574   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:16.389639   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:16.414634   45025 cri.go:89] found id: ""
	I1211 00:20:16.414647   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.414654   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:16.414659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:16.414721   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:16.441274   45025 cri.go:89] found id: ""
	I1211 00:20:16.441287   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.441293   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:16.441298   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:16.441352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:16.466318   45025 cri.go:89] found id: ""
	I1211 00:20:16.466331   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.466338   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:16.466343   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:16.466399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:16.492814   45025 cri.go:89] found id: ""
	I1211 00:20:16.492827   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.492834   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:16.492839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:16.492894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:16.518104   45025 cri.go:89] found id: ""
	I1211 00:20:16.518117   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.518125   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:16.518130   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:16.518193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:16.543245   45025 cri.go:89] found id: ""
	I1211 00:20:16.543260   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.543267   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:16.543272   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:16.543331   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:16.567767   45025 cri.go:89] found id: ""
	I1211 00:20:16.567781   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.567788   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:16.567795   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:16.567806   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:16.635880   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:16.635897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:16.647253   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:16.647269   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:16.711132   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:16.711143   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:16.711154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:16.781461   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:16.781479   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
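For reference, each diagnostic pass above reduces to the following shell sequence, a sketch assembled from the commands in the log itself (it assumes crictl, journalctl, and the pinned kubectl binary are present on the node):

    # Probe every expected control-plane container; empty output means "not found".
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Gather the logs minikube collects on each pass.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # This step fails while the apiserver is down (connection refused on 8441).
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig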
	I1211 00:20:19.312031   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:19.322411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:19.322469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:19.349102   45025 cri.go:89] found id: ""
	I1211 00:20:19.349116   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.349124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:19.349129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:19.349190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:19.373803   45025 cri.go:89] found id: ""
	I1211 00:20:19.373818   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.373825   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:19.373830   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:19.373891   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:19.402187   45025 cri.go:89] found id: ""
	I1211 00:20:19.402201   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.402208   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:19.402213   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:19.402274   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:19.427606   45025 cri.go:89] found id: ""
	I1211 00:20:19.427620   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.427628   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:19.427633   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:19.427693   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:19.452647   45025 cri.go:89] found id: ""
	I1211 00:20:19.452660   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.452667   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:19.452671   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:19.452732   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:19.482184   45025 cri.go:89] found id: ""
	I1211 00:20:19.482198   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.482205   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:19.482211   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:19.482266   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:19.508334   45025 cri.go:89] found id: ""
	I1211 00:20:19.508348   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.508355   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:19.508369   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:19.508379   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:19.582679   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:19.582703   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.613878   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:19.613897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:19.688185   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:19.688206   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:19.699902   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:19.699917   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:19.768799   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.269027   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:22.278950   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:22.279030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:22.303632   45025 cri.go:89] found id: ""
	I1211 00:20:22.303646   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.303653   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:22.303659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:22.303714   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:22.329589   45025 cri.go:89] found id: ""
	I1211 00:20:22.329602   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.329647   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:22.329653   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:22.329707   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:22.359724   45025 cri.go:89] found id: ""
	I1211 00:20:22.359737   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.359744   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:22.359749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:22.359806   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:22.385684   45025 cri.go:89] found id: ""
	I1211 00:20:22.385697   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.385704   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:22.385709   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:22.385768   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:22.411515   45025 cri.go:89] found id: ""
	I1211 00:20:22.411529   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.411536   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:22.411541   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:22.411601   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:22.437841   45025 cri.go:89] found id: ""
	I1211 00:20:22.437858   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.437865   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:22.437870   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:22.437926   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:22.462799   45025 cri.go:89] found id: ""
	I1211 00:20:22.462812   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.462819   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:22.462830   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:22.462840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:22.530683   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:22.530700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:22.541777   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:22.541792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:22.606464   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.606473   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:22.606484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:22.675683   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:22.675704   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:25.205679   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:25.215714   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:25.215772   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:25.240624   45025 cri.go:89] found id: ""
	I1211 00:20:25.240637   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.240644   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:25.240650   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:25.240704   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:25.266729   45025 cri.go:89] found id: ""
	I1211 00:20:25.266743   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.266761   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:25.266766   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:25.266833   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:25.292270   45025 cri.go:89] found id: ""
	I1211 00:20:25.292284   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.292291   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:25.292296   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:25.292352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:25.316988   45025 cri.go:89] found id: ""
	I1211 00:20:25.317013   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.317021   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:25.317027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:25.317094   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:25.342079   45025 cri.go:89] found id: ""
	I1211 00:20:25.342092   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.342100   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:25.342105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:25.342166   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:25.369363   45025 cri.go:89] found id: ""
	I1211 00:20:25.369376   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.369383   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:25.369388   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:25.369445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:25.395141   45025 cri.go:89] found id: ""
	I1211 00:20:25.395155   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.395166   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:25.395173   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:25.395183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:25.459743   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:25.459761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:25.470311   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:25.470325   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:25.537864   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:25.537874   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:25.537884   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:25.605782   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:25.605800   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:28.140709   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:28.152210   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:28.152270   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:28.192161   45025 cri.go:89] found id: ""
	I1211 00:20:28.192175   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.192182   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:28.192188   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:28.192254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:28.226107   45025 cri.go:89] found id: ""
	I1211 00:20:28.226121   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.226128   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:28.226133   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:28.226190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:28.252351   45025 cri.go:89] found id: ""
	I1211 00:20:28.252364   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.252371   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:28.252376   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:28.252437   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:28.277856   45025 cri.go:89] found id: ""
	I1211 00:20:28.277869   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.277876   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:28.277882   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:28.277942   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:28.303425   45025 cri.go:89] found id: ""
	I1211 00:20:28.303442   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.303449   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:28.303454   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:28.303533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:28.327952   45025 cri.go:89] found id: ""
	I1211 00:20:28.327965   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.327973   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:28.327978   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:28.328036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:28.352541   45025 cri.go:89] found id: ""
	I1211 00:20:28.352556   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.352563   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:28.352571   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:28.352581   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:28.417587   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:28.417606   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:28.428990   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:28.429005   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:28.493232   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:28.493242   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:28.493252   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:28.561239   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:28.561257   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.093955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:31.104422   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:31.104484   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:31.130996   45025 cri.go:89] found id: ""
	I1211 00:20:31.131011   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.131018   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:31.131023   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:31.131088   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:31.170443   45025 cri.go:89] found id: ""
	I1211 00:20:31.170457   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.170465   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:31.170470   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:31.170531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:31.204748   45025 cri.go:89] found id: ""
	I1211 00:20:31.204769   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.204777   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:31.204781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:31.204846   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:31.235573   45025 cri.go:89] found id: ""
	I1211 00:20:31.235587   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.235594   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:31.235606   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:31.235664   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:31.260669   45025 cri.go:89] found id: ""
	I1211 00:20:31.260683   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.260690   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:31.260695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:31.260753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:31.286253   45025 cri.go:89] found id: ""
	I1211 00:20:31.286267   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.286274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:31.286279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:31.286338   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:31.313885   45025 cri.go:89] found id: ""
	I1211 00:20:31.313903   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.313910   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:31.313917   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:31.313928   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:31.376250   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:31.376260   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:31.376271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:31.445930   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:31.445948   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.477909   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:31.477923   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:31.547558   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:31.547575   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.060343   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:34.071407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:34.071468   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:34.097367   45025 cri.go:89] found id: ""
	I1211 00:20:34.097381   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.097389   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:34.097394   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:34.097455   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:34.125233   45025 cri.go:89] found id: ""
	I1211 00:20:34.125246   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.125253   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:34.125258   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:34.125313   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:34.152711   45025 cri.go:89] found id: ""
	I1211 00:20:34.152724   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.152731   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:34.152735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:34.152797   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:34.183533   45025 cri.go:89] found id: ""
	I1211 00:20:34.183547   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.183553   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:34.183559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:34.183627   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:34.212367   45025 cri.go:89] found id: ""
	I1211 00:20:34.212379   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.212386   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:34.212392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:34.212450   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:34.239991   45025 cri.go:89] found id: ""
	I1211 00:20:34.240005   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.240012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:34.240017   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:34.240084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:34.265795   45025 cri.go:89] found id: ""
	I1211 00:20:34.265809   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.265816   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:34.265823   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:34.265833   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:34.335452   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:34.335471   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:34.366714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:34.366729   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:34.434761   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:34.434779   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.445767   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:34.445782   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:34.513054   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.014301   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:37.029619   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:37.029688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:37.061510   45025 cri.go:89] found id: ""
	I1211 00:20:37.061525   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.061533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:37.061539   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:37.061597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:37.087429   45025 cri.go:89] found id: ""
	I1211 00:20:37.087442   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.087449   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:37.087454   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:37.087513   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:37.113865   45025 cri.go:89] found id: ""
	I1211 00:20:37.113878   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.113885   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:37.113890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:37.113951   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:37.139634   45025 cri.go:89] found id: ""
	I1211 00:20:37.139647   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.139655   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:37.139659   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:37.139723   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:37.177513   45025 cri.go:89] found id: ""
	I1211 00:20:37.177527   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.177535   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:37.177540   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:37.177599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:37.207209   45025 cri.go:89] found id: ""
	I1211 00:20:37.207223   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.207230   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:37.207235   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:37.207291   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:37.235860   45025 cri.go:89] found id: ""
	I1211 00:20:37.235874   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.235880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:37.235888   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:37.235898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:37.302242   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:37.302260   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:37.313364   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:37.313380   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:37.383109   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.383119   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:37.383134   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:37.452480   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:37.452497   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:39.981534   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:39.992011   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:39.992074   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:40.037108   45025 cri.go:89] found id: ""
	I1211 00:20:40.037123   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.037131   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:40.037137   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:40.037205   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:40.073935   45025 cri.go:89] found id: ""
	I1211 00:20:40.073950   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.073958   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:40.073963   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:40.074024   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:40.103233   45025 cri.go:89] found id: ""
	I1211 00:20:40.103247   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.103255   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:40.103260   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:40.103324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:40.130384   45025 cri.go:89] found id: ""
	I1211 00:20:40.130398   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.130405   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:40.130411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:40.130482   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:40.168123   45025 cri.go:89] found id: ""
	I1211 00:20:40.168137   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.168143   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:40.168149   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:40.168209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:40.206729   45025 cri.go:89] found id: ""
	I1211 00:20:40.206743   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.206750   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:40.206755   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:40.206814   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:40.237917   45025 cri.go:89] found id: ""
	I1211 00:20:40.237930   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.237937   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:40.237945   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:40.237954   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:40.306231   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:40.306249   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:40.335237   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:40.335256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:40.407102   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:40.407124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:40.418948   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:40.418987   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:40.487059   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
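Each pass above follows the same shape: minikube probes for a running kube-apiserver (first via pgrep, then via crictl for each control-plane component), and when every query returns zero containers it falls back to dumping kubelet, dmesg, CRI-O, and describe-nodes output before retrying. A minimal Go sketch of that poll loop, assuming a simple ticker with the roughly 3-second cadence visible in the timestamps (an illustration of the pattern, not minikube's actual kverify code):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverUp runs the same two probes the log shows via ssh_runner:
    // a pgrep for the process, then crictl for a matching container.
    func apiserverUp(ctx context.Context) bool {
        if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return true
        }
        out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        return err == nil && len(out) > 0
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
        defer cancel()
        tick := time.NewTicker(3 * time.Second) // matches the ~3s gap between passes
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for kube-apiserver")
                return
            case <-tick.C:
                if apiserverUp(ctx) {
                    fmt.Println("kube-apiserver is up")
                    return
                }
                // On failure, minikube gathers kubelet/dmesg/CRI-O/describe-nodes logs.
            }
        }
    }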
	I1211 00:20:42.987371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:42.997627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:42.997687   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:43.034834   45025 cri.go:89] found id: ""
	I1211 00:20:43.034847   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.034854   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:43.034858   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:43.034917   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:43.061014   45025 cri.go:89] found id: ""
	I1211 00:20:43.061028   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.061035   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:43.061040   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:43.061111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:43.086728   45025 cri.go:89] found id: ""
	I1211 00:20:43.086742   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.086749   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:43.086754   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:43.086815   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:43.112537   45025 cri.go:89] found id: ""
	I1211 00:20:43.112551   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.112557   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:43.112563   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:43.112619   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:43.138331   45025 cri.go:89] found id: ""
	I1211 00:20:43.138358   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.138365   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:43.138370   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:43.138440   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:43.177883   45025 cri.go:89] found id: ""
	I1211 00:20:43.177895   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.177902   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:43.177908   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:43.177976   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:43.208963   45025 cri.go:89] found id: ""
	I1211 00:20:43.208976   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.208984   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:43.208991   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:43.209001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:43.276100   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:43.276119   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:43.287251   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:43.287266   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:43.358374   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:43.358389   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:43.358399   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:43.430845   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:43.430863   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
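The describe-nodes failure repeated in every pass reduces to one symptom: nothing is listening on the apiserver endpoint, so kubectl's discovery calls to https://localhost:8441 die with "connection refused". That is consistent with the crictl queries above finding zero kube-apiserver containers. The check kubectl is effectively failing can be reproduced with a plain TCP dial against the host and port taken from the error lines (a sketch; only the endpoint comes from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8441 is the apiserver endpoint from the kubeconfig errors above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8441")
    }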
	I1211 00:20:45.960980   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:45.971128   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:45.971189   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:45.997483   45025 cri.go:89] found id: ""
	I1211 00:20:45.997497   45025 logs.go:282] 0 containers: []
	W1211 00:20:45.997504   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:45.997509   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:45.997566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:46.030243   45025 cri.go:89] found id: ""
	I1211 00:20:46.030257   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.030265   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:46.030280   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:46.030341   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:46.057812   45025 cri.go:89] found id: ""
	I1211 00:20:46.057826   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.057834   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:46.057839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:46.057896   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:46.094313   45025 cri.go:89] found id: ""
	I1211 00:20:46.094326   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.094334   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:46.094339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:46.094403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:46.120781   45025 cri.go:89] found id: ""
	I1211 00:20:46.120796   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.120803   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:46.120808   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:46.120867   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:46.153078   45025 cri.go:89] found id: ""
	I1211 00:20:46.153091   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.153099   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:46.153105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:46.153164   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:46.184025   45025 cri.go:89] found id: ""
	I1211 00:20:46.184038   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.184045   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:46.184052   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:46.184065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:46.195376   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:46.195391   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:46.264561   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:46.264571   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:46.264583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:46.334575   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:46.334592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:46.365686   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:46.365701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:48.932730   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:48.943221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:48.943289   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:48.970754   45025 cri.go:89] found id: ""
	I1211 00:20:48.970769   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.970775   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:48.970781   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:48.970851   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:48.998179   45025 cri.go:89] found id: ""
	I1211 00:20:48.998193   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.998200   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:48.998205   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:48.998265   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:49.027459   45025 cri.go:89] found id: ""
	I1211 00:20:49.027472   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.027485   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:49.027490   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:49.027554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:49.053666   45025 cri.go:89] found id: ""
	I1211 00:20:49.053693   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.053700   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:49.053705   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:49.053773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:49.080140   45025 cri.go:89] found id: ""
	I1211 00:20:49.080155   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.080162   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:49.080167   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:49.080223   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:49.106258   45025 cri.go:89] found id: ""
	I1211 00:20:49.106281   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.106289   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:49.106294   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:49.106362   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:49.131929   45025 cri.go:89] found id: ""
	I1211 00:20:49.131952   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.131960   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:49.131967   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:49.131978   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:49.216291   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:49.216315   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:49.247289   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:49.247308   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:49.319005   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:49.319026   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:49.330154   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:49.330171   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:49.399415   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
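Note that the gather order rotates between passes: one pass starts with CRI-O and container status, the next with kubelet and dmesg. That pattern is consistent with ranging over a Go map of log sources, whose iteration order is deliberately randomized; this is an inference from the log, not confirmed against minikube's source. A sketch of the effect, with a hypothetical map of the five sources seen above:

    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        for name, cmd := range sources { // order changes from run to run
            fmt.Printf("Gathering logs for %s: %s\n", name, cmd)
        }
    }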
	I1211 00:20:51.899678   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:51.910510   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:51.910571   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:51.941358   45025 cri.go:89] found id: ""
	I1211 00:20:51.941372   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.941379   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:51.941384   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:51.941441   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:51.972273   45025 cri.go:89] found id: ""
	I1211 00:20:51.972287   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.972295   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:51.972300   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:51.972357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:51.998172   45025 cri.go:89] found id: ""
	I1211 00:20:51.998184   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.998191   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:51.998197   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:51.998256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:52.028439   45025 cri.go:89] found id: ""
	I1211 00:20:52.028453   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.028460   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:52.028465   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:52.028526   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:52.060485   45025 cri.go:89] found id: ""
	I1211 00:20:52.060500   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.060508   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:52.060513   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:52.060574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:52.093990   45025 cri.go:89] found id: ""
	I1211 00:20:52.094005   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.094012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:52.094018   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:52.094084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:52.122577   45025 cri.go:89] found id: ""
	I1211 00:20:52.122592   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.122599   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:52.122606   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:52.122624   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:52.191378   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:52.191396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:52.203404   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:52.203421   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:52.272572   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:52.272582   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:52.272592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:52.340655   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:52.340672   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:54.871996   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:54.882238   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:54.882299   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:54.908417   45025 cri.go:89] found id: ""
	I1211 00:20:54.908430   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.908437   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:54.908442   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:54.908512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:54.937462   45025 cri.go:89] found id: ""
	I1211 00:20:54.937475   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.937482   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:54.937487   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:54.937547   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:54.965546   45025 cri.go:89] found id: ""
	I1211 00:20:54.965560   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.965567   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:54.965572   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:54.965629   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:54.991381   45025 cri.go:89] found id: ""
	I1211 00:20:54.991395   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.991403   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:54.991407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:54.991469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:55.023225   45025 cri.go:89] found id: ""
	I1211 00:20:55.023243   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.023251   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:55.023257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:55.023340   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:55.069033   45025 cri.go:89] found id: ""
	I1211 00:20:55.069049   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.069056   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:55.069062   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:55.069130   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:55.104401   45025 cri.go:89] found id: ""
	I1211 00:20:55.104417   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.104424   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:55.104432   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:55.104444   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:55.117919   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:55.117939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:55.207253   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:55.207264   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:55.207275   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:55.285978   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:55.286001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:55.318311   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:55.318327   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:57.883510   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:57.893407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:57.893478   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:57.918657   45025 cri.go:89] found id: ""
	I1211 00:20:57.918670   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.918677   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:57.918684   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:57.918739   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:57.944248   45025 cri.go:89] found id: ""
	I1211 00:20:57.944261   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.944268   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:57.944274   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:57.944337   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:57.969321   45025 cri.go:89] found id: ""
	I1211 00:20:57.969335   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.969342   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:57.969347   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:57.969403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:57.994466   45025 cri.go:89] found id: ""
	I1211 00:20:57.994482   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.994490   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:57.994495   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:57.994554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:58.021937   45025 cri.go:89] found id: ""
	I1211 00:20:58.021954   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.021962   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:58.021967   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:58.022033   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:58.048826   45025 cri.go:89] found id: ""
	I1211 00:20:58.048840   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.048848   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:58.048854   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:58.048912   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:58.077218   45025 cri.go:89] found id: ""
	I1211 00:20:58.077231   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.077239   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:58.077246   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:58.077256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:58.145681   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:58.145698   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:58.191796   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:58.191814   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:58.268737   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:58.268756   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:58.280057   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:58.280074   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:58.347775   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:00.848653   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:00.859447   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:00.859507   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:00.885107   45025 cri.go:89] found id: ""
	I1211 00:21:00.885123   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.885130   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:00.885136   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:00.885195   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:00.916160   45025 cri.go:89] found id: ""
	I1211 00:21:00.916174   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.916181   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:00.916186   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:00.916242   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:00.941904   45025 cri.go:89] found id: ""
	I1211 00:21:00.941918   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.941926   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:00.941931   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:00.941996   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:00.969553   45025 cri.go:89] found id: ""
	I1211 00:21:00.969566   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.969573   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:00.969579   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:00.969640   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:00.995856   45025 cri.go:89] found id: ""
	I1211 00:21:00.995869   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.995876   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:00.995881   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:00.995936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:01.023643   45025 cri.go:89] found id: ""
	I1211 00:21:01.023672   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.023679   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:01.023685   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:01.023753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:01.049959   45025 cri.go:89] found id: ""
	I1211 00:21:01.049972   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.049979   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:01.049986   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:01.049996   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:01.117206   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:01.117224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:01.129158   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:01.129174   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:01.221837   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:01.221848   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:01.221858   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:01.292030   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:01.292052   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:03.824471   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:03.834984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:03.835048   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:03.865620   45025 cri.go:89] found id: ""
	I1211 00:21:03.865633   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.865640   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:03.865646   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:03.865706   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:03.894960   45025 cri.go:89] found id: ""
	I1211 00:21:03.895000   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.895012   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:03.895018   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:03.895093   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:03.922002   45025 cri.go:89] found id: ""
	I1211 00:21:03.922016   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.922033   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:03.922039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:03.922114   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:03.949011   45025 cri.go:89] found id: ""
	I1211 00:21:03.949025   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.949032   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:03.949037   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:03.949104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:03.979941   45025 cri.go:89] found id: ""
	I1211 00:21:03.979955   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.979983   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:03.979988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:03.980056   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:04.005356   45025 cri.go:89] found id: ""
	I1211 00:21:04.005379   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.005386   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:04.005392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:04.005498   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:04.036172   45025 cri.go:89] found id: ""
	I1211 00:21:04.036193   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.036201   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:04.036210   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:04.036224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:04.075735   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:04.075754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:04.141955   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:04.141976   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:04.154375   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:04.154390   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:04.236732   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:04.236744   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:04.236754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
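
Note: the block above is one pass of minikube's diagnostic sweep: it lists CRI containers for each control-plane component by name, then gathers container status, kubelet, dmesg, describe-nodes, and CRI-O logs. A minimal shell sketch of the enumeration, using the exact commands from the log (the loop itself is illustrative; minikube drives these calls from its Go code, e.g. cri.go and logs.go):

	# Illustrative only: component names and commands are copied from the log above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"   # empty output => "No container was found"
	done
	# container status, with a fallback if crictl is missing from PATH:
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
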
	I1211 00:21:06.812855   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:06.823280   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:06.823348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:06.849675   45025 cri.go:89] found id: ""
	I1211 00:21:06.849689   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.849696   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:06.849701   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:06.849760   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:06.876012   45025 cri.go:89] found id: ""
	I1211 00:21:06.876026   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.876033   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:06.876038   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:06.876095   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:06.901644   45025 cri.go:89] found id: ""
	I1211 00:21:06.901658   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.901664   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:06.901669   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:06.901726   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:06.926863   45025 cri.go:89] found id: ""
	I1211 00:21:06.926877   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.926885   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:06.926890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:06.926946   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:06.956891   45025 cri.go:89] found id: ""
	I1211 00:21:06.956905   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.956912   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:06.956917   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:06.956978   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:06.981741   45025 cri.go:89] found id: ""
	I1211 00:21:06.981754   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.981762   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:06.981767   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:06.981826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:07.007640   45025 cri.go:89] found id: ""
	I1211 00:21:07.007653   45025 logs.go:282] 0 containers: []
	W1211 00:21:07.007660   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:07.007666   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:07.007678   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:07.076566   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:07.076583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:07.087895   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:07.087910   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:07.159453   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:07.146952   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.147699   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.149250   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.150203   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.152456   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:07.146952   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.147699   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.149250   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.150203   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.152456   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:07.159463   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:07.159474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:07.242834   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:07.242853   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:09.772607   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:09.782749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:09.782809   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:09.809021   45025 cri.go:89] found id: ""
	I1211 00:21:09.809035   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.809042   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:09.809048   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:09.809106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:09.837599   45025 cri.go:89] found id: ""
	I1211 00:21:09.837612   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.837619   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:09.837624   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:09.837681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:09.865754   45025 cri.go:89] found id: ""
	I1211 00:21:09.865767   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.865775   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:09.865780   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:09.865841   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:09.890922   45025 cri.go:89] found id: ""
	I1211 00:21:09.890936   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.890943   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:09.890948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:09.891034   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:09.916087   45025 cri.go:89] found id: ""
	I1211 00:21:09.916100   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.916108   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:09.916113   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:09.916169   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:09.941494   45025 cri.go:89] found id: ""
	I1211 00:21:09.941507   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.941514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:09.941520   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:09.941574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:09.967438   45025 cri.go:89] found id: ""
	I1211 00:21:09.967452   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.967460   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:09.967467   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:09.967478   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:10.042566   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:10.032083   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.032850   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.035627   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.036166   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.037948   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:10.032083   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.032850   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.035627   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.036166   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.037948   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:10.042577   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:10.042589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:10.114716   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:10.114734   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:10.147711   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:10.147727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:10.216212   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:10.216230   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
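
Note: the repeated "pgrep -xnf kube-apiserver.*minikube.*" probes above arrive roughly every three seconds, which implies a wait-and-retry loop around the apiserver process check. A minimal sketch of that loop, assuming the interval inferred from the log timestamps (the real loop is implemented in minikube's Go code, not in shell):

	# Sketch only: the ~3s interval is inferred from the timestamps, not stated in the log.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # each failed probe triggers another full log-gathering pass
	done
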
	I1211 00:21:12.728208   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:12.738793   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:12.738852   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:12.765512   45025 cri.go:89] found id: ""
	I1211 00:21:12.765527   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.765534   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:12.765540   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:12.765599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:12.792241   45025 cri.go:89] found id: ""
	I1211 00:21:12.792254   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.792261   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:12.792266   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:12.792326   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:12.821945   45025 cri.go:89] found id: ""
	I1211 00:21:12.821959   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.821966   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:12.821971   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:12.822029   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:12.847567   45025 cri.go:89] found id: ""
	I1211 00:21:12.847581   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.847588   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:12.847593   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:12.847649   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:12.873684   45025 cri.go:89] found id: ""
	I1211 00:21:12.873699   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.873706   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:12.873711   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:12.873769   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:12.899211   45025 cri.go:89] found id: ""
	I1211 00:21:12.899225   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.899233   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:12.899241   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:12.899301   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:12.925366   45025 cri.go:89] found id: ""
	I1211 00:21:12.925380   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.925387   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:12.925395   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:12.925408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:12.992650   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:12.992667   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:13.004006   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:13.004021   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:13.070046   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:13.060977   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.061855   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.063662   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.064880   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.065622   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:13.060977   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.061855   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.063662   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.064880   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.065622   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:13.070055   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:13.070065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:13.137969   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:13.137986   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:15.678794   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:15.688954   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:15.689022   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:15.714099   45025 cri.go:89] found id: ""
	I1211 00:21:15.714113   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.714120   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:15.714125   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:15.714190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:15.738722   45025 cri.go:89] found id: ""
	I1211 00:21:15.738735   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.738742   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:15.738747   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:15.738801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:15.764238   45025 cri.go:89] found id: ""
	I1211 00:21:15.764251   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.764258   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:15.764269   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:15.764330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:15.789987   45025 cri.go:89] found id: ""
	I1211 00:21:15.790000   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.790007   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:15.790012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:15.790066   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:15.815536   45025 cri.go:89] found id: ""
	I1211 00:21:15.815549   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.815556   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:15.815567   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:15.815626   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:15.840404   45025 cri.go:89] found id: ""
	I1211 00:21:15.840424   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.840433   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:15.840438   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:15.840497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:15.865028   45025 cri.go:89] found id: ""
	I1211 00:21:15.865041   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.865048   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:15.865054   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:15.865064   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:15.930832   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:15.930850   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:15.942270   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:15.942285   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:16.008579   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:16.000061   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.000957   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.002703   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.003145   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.004633   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:16.000061   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.000957   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.002703   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.003145   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.004633   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:16.008589   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:16.008600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:16.086023   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:16.086047   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:18.616564   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:18.627177   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:18.627235   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:18.655749   45025 cri.go:89] found id: ""
	I1211 00:21:18.655763   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.655771   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:18.655776   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:18.655838   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:18.685932   45025 cri.go:89] found id: ""
	I1211 00:21:18.685946   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.685953   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:18.685958   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:18.686019   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:18.713761   45025 cri.go:89] found id: ""
	I1211 00:21:18.713775   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.713783   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:18.713788   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:18.713847   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:18.740458   45025 cri.go:89] found id: ""
	I1211 00:21:18.740472   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.740480   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:18.740485   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:18.740540   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:18.766011   45025 cri.go:89] found id: ""
	I1211 00:21:18.766025   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.766032   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:18.766036   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:18.766092   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:18.791387   45025 cri.go:89] found id: ""
	I1211 00:21:18.791401   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.791409   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:18.791414   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:18.791471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:18.817326   45025 cri.go:89] found id: ""
	I1211 00:21:18.817340   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.817347   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:18.817354   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:18.817366   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:18.885570   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:18.876789   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.877521   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879346   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879866   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.881477   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:18.876789   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.877521   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879346   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879866   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.881477   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:18.885581   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:18.885592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:18.953656   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:18.953674   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:18.981613   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:18.981629   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:19.048252   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:19.048271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.561008   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:21.571125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:21.571184   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:21.597491   45025 cri.go:89] found id: ""
	I1211 00:21:21.597505   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.597512   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:21.597520   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:21.597576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:21.623022   45025 cri.go:89] found id: ""
	I1211 00:21:21.623040   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.623047   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:21.623052   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:21.623109   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:21.648127   45025 cri.go:89] found id: ""
	I1211 00:21:21.648141   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.648148   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:21.648154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:21.648212   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:21.673563   45025 cri.go:89] found id: ""
	I1211 00:21:21.673577   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.673584   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:21.673589   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:21.673646   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:21.701744   45025 cri.go:89] found id: ""
	I1211 00:21:21.701757   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.701764   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:21.701769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:21.701830   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:21.727163   45025 cri.go:89] found id: ""
	I1211 00:21:21.727177   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.727184   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:21.727189   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:21.727247   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:21.753680   45025 cri.go:89] found id: ""
	I1211 00:21:21.753694   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.753702   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:21.753709   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:21.753720   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.764845   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:21.764862   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:21.825854   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:21.825865   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:21.825877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:21.895180   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:21.895198   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:21.923512   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:21.923530   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.495120   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:24.505627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:24.505701   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:24.532103   45025 cri.go:89] found id: ""
	I1211 00:21:24.532117   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.532124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:24.532129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:24.532183   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:24.561426   45025 cri.go:89] found id: ""
	I1211 00:21:24.561439   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.561447   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:24.561451   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:24.561509   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:24.591493   45025 cri.go:89] found id: ""
	I1211 00:21:24.591506   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.591514   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:24.591519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:24.591582   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:24.618513   45025 cri.go:89] found id: ""
	I1211 00:21:24.618527   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.618534   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:24.618539   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:24.618596   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:24.644876   45025 cri.go:89] found id: ""
	I1211 00:21:24.644890   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.644899   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:24.644904   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:24.644963   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:24.674148   45025 cri.go:89] found id: ""
	I1211 00:21:24.674161   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.674168   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:24.674174   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:24.674236   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:24.700184   45025 cri.go:89] found id: ""
	I1211 00:21:24.700198   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.700205   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:24.700212   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:24.700222   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.765329   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:24.765346   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:24.776593   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:24.776608   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:24.844320   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:24.844329   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:24.844342   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:24.912094   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:24.912111   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.443355   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:27.454562   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:27.454628   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:27.480512   45025 cri.go:89] found id: ""
	I1211 00:21:27.480526   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.480533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:27.480538   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:27.480604   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:27.507028   45025 cri.go:89] found id: ""
	I1211 00:21:27.507041   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.507049   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:27.507054   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:27.507111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:27.533346   45025 cri.go:89] found id: ""
	I1211 00:21:27.533360   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.533367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:27.533372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:27.533435   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:27.563021   45025 cri.go:89] found id: ""
	I1211 00:21:27.563034   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.563042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:27.563047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:27.563105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:27.587813   45025 cri.go:89] found id: ""
	I1211 00:21:27.587831   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.587838   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:27.587843   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:27.587900   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:27.616925   45025 cri.go:89] found id: ""
	I1211 00:21:27.616938   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.616945   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:27.616951   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:27.617007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:27.642256   45025 cri.go:89] found id: ""
	I1211 00:21:27.642269   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.642276   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:27.642283   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:27.642294   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:27.653306   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:27.653326   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:27.716428   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:27.716438   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:27.716455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:27.783513   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:27.783533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.814010   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:27.814025   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
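
Note: every "describe nodes" attempt fails the same way: kubectl dials localhost:8441 and gets connection refused, consistent with the empty crictl listings showing no kube-apiserver container running at all. To confirm this by hand one could run the following on the node (paths copied from the log; the curl health probe is an added assumption, not something the test executed):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # expected: connection refused on :8441
	curl -sk https://localhost:8441/healthz       # expected: no listener on 8441
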
	I1211 00:21:30.382748   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:30.393371   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:30.393432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:30.427609   45025 cri.go:89] found id: ""
	I1211 00:21:30.427623   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.427629   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:30.427635   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:30.427696   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:30.457893   45025 cri.go:89] found id: ""
	I1211 00:21:30.457907   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.457913   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:30.457918   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:30.457980   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:30.492222   45025 cri.go:89] found id: ""
	I1211 00:21:30.492234   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.492241   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:30.492246   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:30.492303   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:30.521511   45025 cri.go:89] found id: ""
	I1211 00:21:30.521525   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.521532   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:30.521537   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:30.521597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:30.547821   45025 cri.go:89] found id: ""
	I1211 00:21:30.547835   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.547842   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:30.547847   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:30.547906   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:30.572652   45025 cri.go:89] found id: ""
	I1211 00:21:30.572666   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.572675   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:30.572681   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:30.572737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:30.601878   45025 cri.go:89] found id: ""
	I1211 00:21:30.601906   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.601914   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:30.601921   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:30.601932   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:30.613084   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:30.613100   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:30.683127   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
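	Every `kubectl describe nodes` attempt fails identically: nothing on the node is accepting connections on 127.0.0.1:8441, the apiserver port this profile uses. A hedged way to confirm that from inside the node (standard ss/curl commands, not taken from this report):

	    # confirm nothing is listening on the apiserver port
	    sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
	    # /livez is the apiserver health endpoint; "connection refused" here
	    # matches the memcache.go errors in the blocks above
	    curl -sk https://localhost:8441/livez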
	I1211 00:21:30.683136   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:30.683146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:30.750689   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:30.750707   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:30.784168   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:30.784183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.353720   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:33.363733   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:33.363790   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:33.391890   45025 cri.go:89] found id: ""
	I1211 00:21:33.391904   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.391911   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:33.391917   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:33.391984   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:33.423803   45025 cri.go:89] found id: ""
	I1211 00:21:33.423816   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.423823   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:33.423828   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:33.423889   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:33.458122   45025 cri.go:89] found id: ""
	I1211 00:21:33.458135   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.458142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:33.458147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:33.458206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:33.485705   45025 cri.go:89] found id: ""
	I1211 00:21:33.485718   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.485725   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:33.485730   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:33.485786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:33.513596   45025 cri.go:89] found id: ""
	I1211 00:21:33.513609   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.513617   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:33.513622   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:33.513681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:33.539390   45025 cri.go:89] found id: ""
	I1211 00:21:33.539403   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.539412   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:33.539418   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:33.539474   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:33.564837   45025 cri.go:89] found id: ""
	I1211 00:21:33.564849   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.564856   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:33.564863   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:33.564873   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.629883   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:33.629902   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:33.641102   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:33.641118   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:33.708725   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:33.708736   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:33.708746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:33.777920   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:33.777939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
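	Between retries the loop gathers the same four log sources each time: the kubelet and CRI-O units via journalctl, filtered kernel messages via dmesg, and container status via crictl (with a docker fallback). The exact commands are visible in the Run: lines above and can be replayed directly on the node; `--no-pager` below is an addition for interactive use:

	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u crio -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400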
	I1211 00:21:36.306840   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:36.318198   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:36.318256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:36.347923   45025 cri.go:89] found id: ""
	I1211 00:21:36.347936   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.347943   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:36.347948   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:36.348003   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:36.372908   45025 cri.go:89] found id: ""
	I1211 00:21:36.372921   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.372928   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:36.372934   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:36.372994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:36.398449   45025 cri.go:89] found id: ""
	I1211 00:21:36.398462   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.398470   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:36.398478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:36.398533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:36.438503   45025 cri.go:89] found id: ""
	I1211 00:21:36.438516   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.438523   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:36.438528   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:36.438585   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:36.468232   45025 cri.go:89] found id: ""
	I1211 00:21:36.468245   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.468253   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:36.468257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:36.468318   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:36.494076   45025 cri.go:89] found id: ""
	I1211 00:21:36.494089   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.494096   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:36.494101   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:36.494168   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:36.521654   45025 cri.go:89] found id: ""
	I1211 00:21:36.521668   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.521676   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:36.521689   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:36.521700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:36.590822   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:36.590840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.620876   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:36.620891   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:36.689379   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:36.689396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:36.700340   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:36.700355   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:36.768766   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:39.270429   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:39.280501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:39.280558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:39.308182   45025 cri.go:89] found id: ""
	I1211 00:21:39.308203   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.308212   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:39.308218   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:39.308278   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:39.334096   45025 cri.go:89] found id: ""
	I1211 00:21:39.334113   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.334123   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:39.334132   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:39.334203   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:39.360088   45025 cri.go:89] found id: ""
	I1211 00:21:39.360101   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.360108   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:39.360115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:39.360174   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:39.386315   45025 cri.go:89] found id: ""
	I1211 00:21:39.386328   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.386336   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:39.386341   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:39.386399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:39.418994   45025 cri.go:89] found id: ""
	I1211 00:21:39.419008   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.419015   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:39.419020   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:39.419081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:39.446027   45025 cri.go:89] found id: ""
	I1211 00:21:39.446040   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.446047   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:39.446052   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:39.446119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:39.474854   45025 cri.go:89] found id: ""
	I1211 00:21:39.474867   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.474880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:39.474888   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:39.474898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:39.548615   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:39.548635   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:39.577039   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:39.577058   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:39.643644   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:39.643662   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:39.654782   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:39.654797   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:39.721483   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:42.221753   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:42.234138   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:42.234209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:42.265615   45025 cri.go:89] found id: ""
	I1211 00:21:42.265631   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.265639   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:42.265645   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:42.265716   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:42.295341   45025 cri.go:89] found id: ""
	I1211 00:21:42.295357   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.295365   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:42.295371   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:42.295432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:42.324010   45025 cri.go:89] found id: ""
	I1211 00:21:42.324025   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.324032   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:42.324039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:42.324101   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:42.355998   45025 cri.go:89] found id: ""
	I1211 00:21:42.356012   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.356020   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:42.356025   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:42.356087   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:42.385254   45025 cri.go:89] found id: ""
	I1211 00:21:42.385267   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.385275   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:42.385279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:42.385379   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:42.418942   45025 cri.go:89] found id: ""
	I1211 00:21:42.418956   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.418986   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:42.418993   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:42.419049   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:42.446484   45025 cri.go:89] found id: ""
	I1211 00:21:42.446497   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.446504   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:42.446511   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:42.446522   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:42.521774   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:42.521792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:42.533107   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:42.533124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:42.601857   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:42.601867   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:42.601877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:42.670754   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:42.670773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.205036   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:45.223242   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:45.223325   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:45.290545   45025 cri.go:89] found id: ""
	I1211 00:21:45.290560   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.290567   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:45.290580   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:45.290653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:45.321549   45025 cri.go:89] found id: ""
	I1211 00:21:45.321562   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.321581   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:45.321587   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:45.321660   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:45.351332   45025 cri.go:89] found id: ""
	I1211 00:21:45.351345   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.351353   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:45.351358   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:45.351418   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:45.377195   45025 cri.go:89] found id: ""
	I1211 00:21:45.377208   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.377215   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:45.377221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:45.377284   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:45.414830   45025 cri.go:89] found id: ""
	I1211 00:21:45.414844   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.414852   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:45.414857   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:45.414922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:45.444982   45025 cri.go:89] found id: ""
	I1211 00:21:45.444996   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.445003   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:45.445008   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:45.445065   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:45.475344   45025 cri.go:89] found id: ""
	I1211 00:21:45.475358   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.475365   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:45.475372   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:45.475388   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:45.544982   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:45.545000   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.578028   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:45.578044   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:45.650334   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:45.650360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:45.661530   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:45.661547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:45.726146   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:48.226425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:48.236595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:48.236655   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:48.264517   45025 cri.go:89] found id: ""
	I1211 00:21:48.264531   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.264538   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:48.264544   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:48.264602   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:48.291335   45025 cri.go:89] found id: ""
	I1211 00:21:48.291349   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.291356   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:48.291361   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:48.291420   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:48.317975   45025 cri.go:89] found id: ""
	I1211 00:21:48.317996   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.318005   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:48.318010   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:48.318090   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:48.343743   45025 cri.go:89] found id: ""
	I1211 00:21:48.343757   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.343764   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:48.343769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:48.343839   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:48.370548   45025 cri.go:89] found id: ""
	I1211 00:21:48.370561   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.370568   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:48.370573   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:48.370633   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:48.398956   45025 cri.go:89] found id: ""
	I1211 00:21:48.398991   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.398999   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:48.399004   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:48.399081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:48.432879   45025 cri.go:89] found id: ""
	I1211 00:21:48.432892   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.432900   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:48.432908   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:48.432918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:48.514612   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:48.514631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:48.526574   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:48.526589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:48.594430   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:48.594439   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:48.594449   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:48.662467   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:48.662487   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:51.193260   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:51.203850   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:51.203909   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:51.229218   45025 cri.go:89] found id: ""
	I1211 00:21:51.229232   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.229240   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:51.229249   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:51.229307   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:51.255535   45025 cri.go:89] found id: ""
	I1211 00:21:51.255549   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.255556   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:51.255561   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:51.255617   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:51.281281   45025 cri.go:89] found id: ""
	I1211 00:21:51.281295   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.281302   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:51.281306   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:51.281366   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:51.305242   45025 cri.go:89] found id: ""
	I1211 00:21:51.305256   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.305263   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:51.305268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:51.305324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:51.330682   45025 cri.go:89] found id: ""
	I1211 00:21:51.330695   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.330712   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:51.330717   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:51.330786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:51.361324   45025 cri.go:89] found id: ""
	I1211 00:21:51.361338   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.361345   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:51.361351   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:51.361410   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:51.387177   45025 cri.go:89] found id: ""
	I1211 00:21:51.387191   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.387198   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:51.387205   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:51.387216   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:51.461910   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:51.461927   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:51.473746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:51.473761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:51.542962   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:51.542994   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:51.543008   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:51.611981   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:51.612003   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
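	The pattern repeats on a roughly three-second cadence (00:21:27, :30, :33, … :54) with no state change, so this is a fixed-interval poll that will run until its overall timeout expires. A rough shell equivalent of the predicate being polled, with an interval and retry count that are assumptions rather than values read from minikube's source:

	    # sketch only: wait for an apiserver process the way the pgrep line
	    # above does, giving up after ~5 minutes
	    for i in $(seq 1 100); do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      sleep 3
	    done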
	I1211 00:21:54.140885   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:54.151154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:54.151216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:54.177398   45025 cri.go:89] found id: ""
	I1211 00:21:54.177412   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.177419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:54.177424   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:54.177483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:54.202665   45025 cri.go:89] found id: ""
	I1211 00:21:54.202679   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.202686   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:54.202691   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:54.202751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:54.228121   45025 cri.go:89] found id: ""
	I1211 00:21:54.228135   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.228142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:54.228147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:54.228206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:54.254699   45025 cri.go:89] found id: ""
	I1211 00:21:54.254713   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.254726   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:54.254732   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:54.254794   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:54.280912   45025 cri.go:89] found id: ""
	I1211 00:21:54.280926   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.280934   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:54.280939   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:54.281000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:54.309917   45025 cri.go:89] found id: ""
	I1211 00:21:54.309930   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.309937   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:54.309943   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:54.310000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:54.335081   45025 cri.go:89] found id: ""
	I1211 00:21:54.335094   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.335102   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:54.335110   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:54.335120   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:54.402799   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:54.402819   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:54.423966   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:54.423982   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:54.493676   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:54.493685   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:54.493695   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:54.562184   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:54.562202   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
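The repeating blocks above are minikube's control-plane wait loop: roughly every three seconds (judging by the timestamps) it checks for a running apiserver process and then asks the CRI for each control-plane container by name. Both probes can be reproduced by hand with the exact commands from the log:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # any apiserver process yet?
    sudo crictl ps -a --quiet --name=kube-apiserver    # any apiserver container in CRI-O?

Here both return nothing on every pass, so the loop keeps gathering kubelet, dmesg, CRI-O and container-status logs until it gives up.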
	I1211 00:21:57.095145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:57.105735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:57.105793   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:57.137586   45025 cri.go:89] found id: ""
	I1211 00:21:57.137600   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.137607   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:57.137612   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:57.137669   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:57.162960   45025 cri.go:89] found id: ""
	I1211 00:21:57.162997   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.163004   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:57.163009   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:57.163068   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:57.189960   45025 cri.go:89] found id: ""
	I1211 00:21:57.189982   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.189989   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:57.189994   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:57.190059   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:57.215046   45025 cri.go:89] found id: ""
	I1211 00:21:57.215059   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.215067   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:57.215072   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:57.215129   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:57.239646   45025 cri.go:89] found id: ""
	I1211 00:21:57.239659   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.239678   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:57.239682   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:57.239737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:57.264818   45025 cri.go:89] found id: ""
	I1211 00:21:57.264832   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.264839   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:57.264844   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:57.264913   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:57.290063   45025 cri.go:89] found id: ""
	I1211 00:21:57.290076   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.290083   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:57.290090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:57.290103   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:57.300820   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:57.300834   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:57.366226   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:57.366236   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:57.366246   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:57.435439   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:57.435458   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.464292   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:57.464311   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.034825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:00.107263   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:22:00.107592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:22:00.209037   45025 cri.go:89] found id: ""
	I1211 00:22:00.209052   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.209060   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:22:00.209065   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:22:00.209139   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:22:00.259397   45025 cri.go:89] found id: ""
	I1211 00:22:00.259413   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.259420   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:22:00.259426   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:22:00.259499   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:22:00.300996   45025 cri.go:89] found id: ""
	I1211 00:22:00.301011   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.301020   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:22:00.301026   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:22:00.301121   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:22:00.355749   45025 cri.go:89] found id: ""
	I1211 00:22:00.355766   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.355775   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:22:00.355782   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:22:00.355863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:22:00.397265   45025 cri.go:89] found id: ""
	I1211 00:22:00.397279   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.397287   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:22:00.397292   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:22:00.397357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:22:00.431985   45025 cri.go:89] found id: ""
	I1211 00:22:00.432000   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.432008   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:22:00.432014   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:22:00.432079   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:22:00.475122   45025 cri.go:89] found id: ""
	I1211 00:22:00.475138   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.475145   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:22:00.475154   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:22:00.475165   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.544019   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:22:00.544039   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:22:00.556109   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:22:00.556126   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:22:00.625124   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:22:00.625135   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:22:00.625146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:22:00.693368   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:22:00.693387   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:22:03.226119   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:03.236558   45025 kubeadm.go:602] duration metric: took 4m3.502420888s to restartPrimaryControlPlane
	W1211 00:22:03.236621   45025 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 00:22:03.236698   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:22:03.653513   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:22:03.666451   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
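At this point (4m3.5s per the duration metric above) minikube stops trying to restart the existing control plane and wipes the node before re-running kubeadm init. The reset command, unwrapped from the /bin/bash -c invocation in the log:

    # wipe the previous kubeadm state via the CRI-O socket before re-initialising
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force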
	I1211 00:22:03.674394   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:22:03.674497   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:22:03.682496   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:22:03.682506   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:22:03.682556   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:22:03.690253   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:22:03.690312   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:22:03.697814   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:22:03.705532   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:22:03.705584   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:22:03.712909   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.720642   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:22:03.720704   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.728085   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:22:03.735639   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:22:03.735694   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
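The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at this cluster's endpoint. Condensed into one loop (a sketch of what the log does file by file; the exit status 2 here just means the files do not exist yet):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs for a different endpoint
    done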
	I1211 00:22:03.743458   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:22:03.864690   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:22:03.865125   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:22:03.931571   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:26:05.371070   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:26:05.371093   45025 kubeadm.go:319] 
	I1211 00:26:05.371179   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
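Of the three preflight warnings above, only the cgroups one is addressable from configuration: the warning text says kubelet v1.35+ must have the KubeletConfiguration option FailCgroupV1 set to false to keep running on a cgroup v1 host. A hypothetical opt-in, assuming the YAML key is the lower-camel form failCgroupV1 (the key name is an assumption, not verified against this kubelet build):

    # hypothetical sketch: append the opt-in to the kubelet config the log
    # shows kubeadm writing (/var/lib/kubelet/config.yaml); key name assumed
    grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml \
        || echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml

Note the warnings themselves are not fatal; the init below fails on the kubelet health check, not on preflight.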
	I1211 00:26:05.375684   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.375734   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:05.375839   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:05.375903   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:05.375950   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:05.375995   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:05.376042   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:05.376088   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:05.376135   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:05.376181   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:05.376229   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:05.376273   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:05.376319   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:05.376364   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:05.376435   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:05.376530   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:05.376618   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:05.376680   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:05.379737   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:05.379839   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:05.379918   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:05.380012   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:05.380083   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:05.380156   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:05.380207   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:05.380283   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:05.380352   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:05.380433   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:05.380508   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:05.380558   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:05.380610   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:05.380656   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:05.380709   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:05.380759   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:05.380821   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:05.380871   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:05.380957   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:05.381029   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:05.383945   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:05.384057   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:05.384159   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:05.384228   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:05.384331   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:05.384422   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:05.384548   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:05.384657   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:05.384704   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:05.384857   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:05.384973   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:26:05.385047   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182146s
	I1211 00:26:05.385051   45025 kubeadm.go:319] 
	I1211 00:26:05.385122   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:26:05.385153   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:26:05.385275   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:26:05.385279   45025 kubeadm.go:319] 
	I1211 00:26:05.385390   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:26:05.385422   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:26:05.385452   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:26:05.385461   45025 kubeadm.go:319] 
	W1211 00:26:05.385565   45025 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182146s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
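The failure mode is the same on each attempt: kubeadm waits up to 4m0s for kubelet's local health endpoint and never gets a healthy answer. The error text above names the checks; run by hand on the node they are:

    curl -sSL http://127.0.0.1:10248/healthz   # the exact probe kubeadm uses
    systemctl status kubelet                   # suggested by the error output
    journalctl -xeu kubelet | tail -n 50       # likewise; shows why kubelet is failing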
	
	I1211 00:26:05.385656   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:26:05.805014   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:26:05.817222   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:26:05.817275   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:26:05.825148   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:26:05.825157   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:26:05.825207   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:26:05.832932   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:26:05.832991   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:26:05.840249   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:26:05.848087   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:26:05.848149   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:26:05.855944   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.863906   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:26:05.863960   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.871464   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:26:05.879062   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:26:05.879116   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:26:05.886444   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:26:05.923722   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.924046   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:06.002092   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:06.002152   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:06.002191   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:06.002233   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:06.002283   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:06.002332   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:06.002377   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:06.002429   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:06.002486   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:06.002528   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:06.002578   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:06.002626   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:06.076323   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:06.076462   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:06.076570   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:06.087446   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:06.092847   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:06.092964   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:06.093051   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:06.093134   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:06.093195   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:06.093273   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:06.093327   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:06.093390   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:06.093452   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:06.093529   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:06.093602   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:06.093639   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:06.093696   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:06.504239   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:06.701840   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:07.114481   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:07.226723   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:07.349377   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:07.350330   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:07.353007   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:07.356354   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:07.356511   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:07.356601   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:07.356672   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:07.373379   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:07.373693   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:07.381535   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:07.381916   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:07.382096   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:07.509380   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:07.509514   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:30:07.509220   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00004985s
	I1211 00:30:07.509346   45025 kubeadm.go:319] 
	I1211 00:30:07.509429   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:30:07.509464   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:30:07.509569   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:30:07.509574   45025 kubeadm.go:319] 
	I1211 00:30:07.509677   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:30:07.509708   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:30:07.509737   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:30:07.509740   45025 kubeadm.go:319] 
	I1211 00:30:07.513952   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:30:07.514370   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:30:07.514477   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:30:07.514741   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 00:30:07.514745   45025 kubeadm.go:319] 
	I1211 00:30:07.514828   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:30:07.514885   45025 kubeadm.go:403] duration metric: took 12m7.817411267s to StartCluster
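Worth noting the error changed between the two init attempts: the first probe timed out ("context deadline exceeded") while the second was actively refused ("connection refused"), i.e. by the end nothing was listening on 10248 at all. On the node the two cases are easy to tell apart, for example:

    sudo ss -ltn 'sport = :10248'                           # empty output = kubelet not listening
    curl -sS --max-time 2 http://127.0.0.1:10248/healthz; echo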
	I1211 00:30:07.514914   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:30:07.514994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:30:07.541269   45025 cri.go:89] found id: ""
	I1211 00:30:07.541283   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.541291   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:30:07.541299   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:30:07.541373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:30:07.568371   45025 cri.go:89] found id: ""
	I1211 00:30:07.568385   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.568392   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:30:07.568397   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:30:07.568452   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:30:07.593463   45025 cri.go:89] found id: ""
	I1211 00:30:07.593477   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.593484   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:30:07.593489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:30:07.593551   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:30:07.617718   45025 cri.go:89] found id: ""
	I1211 00:30:07.617732   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.617739   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:30:07.617746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:30:07.617801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:30:07.644176   45025 cri.go:89] found id: ""
	I1211 00:30:07.644190   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.644197   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:30:07.644202   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:30:07.644260   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:30:07.673956   45025 cri.go:89] found id: ""
	I1211 00:30:07.673970   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.673977   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:30:07.673982   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:30:07.674040   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:30:07.699591   45025 cri.go:89] found id: ""
	I1211 00:30:07.699605   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.699612   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:30:07.699619   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:30:07.699631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:30:07.710731   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:30:07.710746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:30:07.782904   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:30:07.782915   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:30:07.782925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:30:07.853292   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:30:07.853310   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:30:07.882071   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:30:07.882089   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 00:30:07.951740   45025 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 00:30:07.951780   45025 out.go:285] * 
	W1211 00:30:07.951888   45025 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1211 00:30:07.951950   45025 out.go:285] * 
	W1211 00:30:07.954090   45025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:30:07.959721   45025 out.go:203] 
	W1211 00:30:07.962947   45025 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1211 00:30:07.963287   45025 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 00:30:07.963357   45025 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 00:30:07.966374   45025 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:11.297961   21399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:11.298829   21399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:11.300387   21399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:11.300959   21399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:11.302550   21399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:30:11 up 41 min,  0 user,  load average: 0.24, 0.29, 0.41
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:30:08 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:09 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 11 00:30:09 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:09 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:09 functional-786978 kubelet[21272]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:09 functional-786978 kubelet[21272]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:09 functional-786978 kubelet[21272]: E1211 00:30:09.467051   21272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:09 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:09 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 11 00:30:10 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:10 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:10 functional-786978 kubelet[21292]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:10 functional-786978 kubelet[21292]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:10 functional-786978 kubelet[21292]: E1211 00:30:10.217106   21292 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 11 00:30:10 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:10 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:30:10 functional-786978 kubelet[21314]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:10 functional-786978 kubelet[21314]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:30:10 functional-786978 kubelet[21314]: E1211 00:30:10.962600   21314 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:30:10 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (341.517841ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.19s)
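The kubeadm output above (repeated verbatim at each error level) points at a single root cause: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits during config validation ("kubelet is configured to not run on a host using cgroup v1"), so the static control-plane pods never start and every probe of port 8441 is refused. The kubeadm warning names the escape hatch: set the kubelet configuration option 'FailCgroupV1' to 'false', or migrate the host to cgroup v2. A minimal sketch of applying that option to the config file the log shows kubeadm writing (/var/lib/kubelet/config.yaml), assuming the YAML field is spelled failCgroupV1 (the warning gives the option name but not its YAML casing):

	# hedged sketch: append the override named by the kubeadm warning, then restart kubelet
	minikube -p functional-786978 ssh -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"

The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) targets a driver mismatch; it likely would not clear this particular validation error, which keys on cgroup v1 versus v2 rather than on the cgroup driver.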

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-786978 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-786978 apply -f testdata/invalidsvc.yaml: exit status 1 (57.73456ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-786978 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
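This failure is downstream of the same dead control plane: kubectl cannot download the OpenAPI schema because nothing is listening on 192.168.49.2:8441, so the manifest in testdata/invalidsvc.yaml is never actually validated. A quick probe of the endpoint from the error above (a sketch using plain curl) distinguishes "cluster down" from "bad manifest":

	curl -k --max-time 5 https://192.168.49.2:8441/healthz
	# connection refused means the apiserver is down and the YAML was never evaluated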

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-786978 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-786978 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-786978 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-786978 --alsologtostderr -v=1] stderr:
I1211 00:32:14.068585   62396 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:14.068717   62396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:14.068727   62396 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:14.068734   62396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:14.069016   62396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:14.069274   62396 mustload.go:66] Loading cluster: functional-786978
I1211 00:32:14.069710   62396 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:14.070201   62396 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:14.088870   62396 host.go:66] Checking if "functional-786978" exists ...
I1211 00:32:14.089188   62396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1211 00:32:14.145772   62396 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:14.135979609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1211 00:32:14.145912   62396 api_server.go:166] Checking apiserver status ...
I1211 00:32:14.145982   62396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1211 00:32:14.146028   62396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:14.166790   62396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
W1211 00:32:14.276765   62396 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1211 00:32:14.280090   62396 out.go:179] * The control-plane node functional-786978 apiserver is not running: (state=Stopped)
I1211 00:32:14.282945   62396 out.go:179]   To start a cluster, run: "minikube start -p functional-786978"
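The dashboard command produces no URL because minikube's apiserver check fails before the dashboard is ever addressed; the check is visible in the stderr above. A sketch of replaying it by hand, reusing the exact pgrep pattern from the Run line (exit status 1 with no output reproduces the 'unable to get apiserver pid' warning):

	minikube -p functional-786978 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'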
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (310.4892ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-786978 service hello-node --url                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh cat /mount-9p/test-1765413124133681473                                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount1 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount2 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount3 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount2                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount3                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ mount     │ -p functional-786978 --kill=true                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-786978 --alsologtostderr -v=1                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:32:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:32:13.818690   62323 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:32:13.818874   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.818903   62323 out.go:374] Setting ErrFile to fd 2...
	I1211 00:32:13.818924   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.819236   62323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:32:13.819622   62323 out.go:368] Setting JSON to false
	I1211 00:32:13.820496   62323 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2620,"bootTime":1765410514,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:32:13.820586   62323 start.go:143] virtualization:  
	I1211 00:32:13.823923   62323 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:32:13.826856   62323 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:32:13.826920   62323 notify.go:221] Checking for updates...
	I1211 00:32:13.832561   62323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:32:13.835419   62323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:32:13.838258   62323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:32:13.841065   62323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:32:13.843962   62323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:32:13.847347   62323 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:32:13.847919   62323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:32:13.879826   62323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:32:13.879943   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:13.937210   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.928080673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:13.937315   62323 docker.go:319] overlay module found
	I1211 00:32:13.940499   62323 out.go:179] * Using the docker driver based on existing profile
	I1211 00:32:13.943472   62323 start.go:309] selected driver: docker
	I1211 00:32:13.943512   62323 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:13.943621   62323 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:32:13.943727   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:14.007488   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.999061697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:14.007934   62323 cni.go:84] Creating CNI manager for ""
	I1211 00:32:14.008001   62323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:32:14.008045   62323 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:14.012782   62323 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:32:15.334273   23466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:15.334658   23466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:15.336191   23466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:15.336492   23466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:15.337984   23466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:32:15 up 43 min,  0 user,  load average: 0.76, 0.43, 0.44
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:32:12 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:13 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 11 00:32:13 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:13 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:13 functional-786978 kubelet[23348]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:13 functional-786978 kubelet[23348]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:13 functional-786978 kubelet[23348]: E1211 00:32:13.465448   23348 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:13 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:13 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 11 00:32:14 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 kubelet[23353]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23353]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23353]: E1211 00:32:14.211102   23353 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 11 00:32:14 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: E1211 00:32:14.956953   23385 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
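Two things stand out in the dump above: CRI-O itself is healthy (it is still answering ImageStatus probes every few minutes), and the empty container table simply reflects that kubelet never got far enough to schedule anything. With the apiserver unreachable, the runtime can still be inspected directly over minikube's ssh wrapper using crictl; a minimal sketch, assuming CRI-O's default socket path inside the node:

  $ out/minikube-linux-arm64 -p functional-786978 ssh -- sudo crictl ps -a
  $ out/minikube-linux-arm64 -p functional-786978 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info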
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (343.886727ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.74s)
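The DashboardCmd failure is downstream of a single root cause visible in the kubelet section above: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restart-loops it (the counter climbs from 1128 to 1130 within two seconds), and with no kubelet the apiserver never comes up, so every kubectl call against localhost:8441 is refused. The usual check for which cgroup hierarchy a host runs, assuming the stock mount point, prints cgroup2fs on cgroup v2 and tmpfs on v1:

  $ stat -fc %T /sys/fs/cgroup/
  tmpfs
  # the node container runs with CgroupnsMode "host" (see the docker inspect
  # output below), so the same check inside it reports the host's hierarchy:
  $ docker exec functional-786978 stat -fc %T /sys/fs/cgroup/

Ubuntu 20.04 boots the hybrid v1 hierarchy by default; moving the host to cgroup v2 (kernel parameter systemd.unified_cgroup_hierarchy=1) is the standard remedy for this class of failure.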

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 status: exit status 2 (320.307481ms)

-- stdout --
	functional-786978
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-786978 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (318.583807ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-786978 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 status -o json: exit status 2 (336.723102ms)

-- stdout --
	{"Name":"functional-786978","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-786978 status -o json" : exit status 2
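Note the inconsistency across the three invocations: the plain and custom-format runs report kubelet Stopped, while the JSON run moments later reports it Running. That is exactly what a systemd restart loop produces; whether `status` samples kubelet up or down depends on where the restart cycle happens to be, so the consistently Stopped APIServer field is the reliable signal. For scripting against the JSON form, one field can be extracted with jq (assuming jq is installed on the host):

  $ out/minikube-linux-arm64 -p functional-786978 status -o json | jq -r .APIServer
  Stopped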
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
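Two details of the inspect output are easy to misread. HostConfig.PortBindings asks Docker for ephemeral host ports (every HostPort is empty), while the ports actually assigned live under NetworkSettings.Ports; the apiserver's 8441/tcp ended up published on 127.0.0.1:32786. Both can be recovered without parsing the JSON; a short sketch, with the port taken from the mapping above:

  $ docker port functional-786978 8441
  127.0.0.1:32786
  # with kubelet down the published apiserver port should refuse connections,
  # matching the "connection refused" errors elsewhere in this report:
  $ curl -k https://127.0.0.1:32786/livez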
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (317.580337ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
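The dump below is what `logs -n 25` collects: the trailing 25 lines of each component section (the command Audit, the Last Start trace, CRI-O, kubelet, and so on), which is why each section cuts off mid-stream. When the truncation hides too much, the same command accepts a --file flag to write an untruncated capture to disk (behavior assumed from the v1.37.0 CLI):

  $ out/minikube-linux-arm64 -p functional-786978 logs --file=functional-786978-logs.txt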
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-786978 service list                                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │                     │
	│ service │ functional-786978 service list -o json                                                                                                              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │                     │
	│ service │ functional-786978 service --namespace=default --https --url hello-node                                                                              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │                     │
	│ service │ functional-786978 service hello-node --url --format={{.IP}}                                                                                         │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │                     │
	│ service │ functional-786978 service hello-node --url                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh cat /mount-9p/test-1765413124133681473                                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount1 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount2 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount3 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh findmnt -T /mount2                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh     │ functional-786978 ssh findmnt -T /mount3                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ mount   │ -p functional-786978 --kill=true                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:17:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:17:55.340423   45025 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:17:55.340537   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340541   45025 out.go:374] Setting ErrFile to fd 2...
	I1211 00:17:55.340544   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340791   45025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:17:55.341139   45025 out.go:368] Setting JSON to false
	I1211 00:17:55.342235   45025 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1762,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:17:55.342290   45025 start.go:143] virtualization:  
	I1211 00:17:55.345626   45025 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:17:55.349437   45025 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:17:55.349518   45025 notify.go:221] Checking for updates...
	I1211 00:17:55.355612   45025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:17:55.358489   45025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:17:55.361319   45025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:17:55.364268   45025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:17:55.367246   45025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:17:55.370742   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:55.370850   45025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:17:55.397690   45025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:17:55.397801   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.502686   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.493021097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.502775   45025 docker.go:319] overlay module found
	I1211 00:17:55.506026   45025 out.go:179] * Using the docker driver based on existing profile
	I1211 00:17:55.508857   45025 start.go:309] selected driver: docker
	I1211 00:17:55.508866   45025 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.508963   45025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:17:55.509064   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.563622   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.55460881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.564041   45025 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 00:17:55.564074   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:55.564121   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:55.564168   45025 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.567337   45025 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:17:55.570124   45025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:17:55.572957   45025 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:17:55.575721   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:55.575758   45025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:17:55.575767   45025 cache.go:65] Caching tarball of preloaded images
	I1211 00:17:55.575808   45025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:17:55.575848   45025 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:17:55.575857   45025 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:17:55.575972   45025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:17:55.595069   45025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:17:55.595078   45025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:17:55.595099   45025 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:17:55.595134   45025 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:17:55.595195   45025 start.go:364] duration metric: took 45.113µs to acquireMachinesLock for "functional-786978"
	I1211 00:17:55.595213   45025 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:17:55.595217   45025 fix.go:54] fixHost starting: 
	I1211 00:17:55.595484   45025 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:17:55.612234   45025 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:17:55.612254   45025 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:17:55.615553   45025 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:17:55.615576   45025 machine.go:94] provisionDockerMachine start ...
	I1211 00:17:55.615650   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.633023   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.633331   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.633337   45025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:17:55.782629   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.782643   45025 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:17:55.782717   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.800268   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.800560   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.800569   45025 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:17:55.960068   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.960134   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.979369   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.979668   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.979683   45025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:17:56.131539   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
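
The guarded /etc/hosts edit above either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the node can always resolve its own hostname. A quick manual check (a sketch; run inside the node, e.g. via `minikube -p functional-786978 ssh`):

    # expect exactly one 127.0.1.1 line carrying the node name
    grep -c '^127.0.1.1.*functional-786978' /etc/hosts
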
	I1211 00:17:56.131559   45025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:17:56.131581   45025 ubuntu.go:190] setting up certificates
	I1211 00:17:56.131589   45025 provision.go:84] configureAuth start
	I1211 00:17:56.131663   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:56.153195   45025 provision.go:143] copyHostCerts
	I1211 00:17:56.153275   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:17:56.153283   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:17:56.153368   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:17:56.153542   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:17:56.153546   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:17:56.153590   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:17:56.153677   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:17:56.153682   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:17:56.153707   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:17:56.153777   45025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:17:56.467494   45025 provision.go:177] copyRemoteCerts
	I1211 00:17:56.467553   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:17:56.467596   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.484090   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:56.587917   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:17:56.605865   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 00:17:56.622832   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:17:56.639884   45025 provision.go:87] duration metric: took 508.274173ms to configureAuth
	I1211 00:17:56.639901   45025 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:17:56.640097   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:56.640201   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.656951   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:56.657259   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:56.657272   45025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:17:57.016039   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:17:57.016056   45025 machine.go:97] duration metric: took 1.400473029s to provisionDockerMachine
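
Whether the insecure-registry option actually reached the runtime can be confirmed from the crio process itself. A minimal sketch, assuming the kicbase crio.service sources CRIO_MINIKUBE_OPTIONS from the /etc/sysconfig/crio.minikube file written above:

    # the env file written over SSH
    sudo cat /etc/sysconfig/crio.minikube
    # after the restart, the flag should appear on crio's command line
    ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'
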
	I1211 00:17:57.016068   45025 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:17:57.016080   45025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:17:57.016152   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:17:57.016210   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.035864   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.138938   45025 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:17:57.142378   45025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:17:57.142395   45025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:17:57.142405   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:17:57.142462   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:17:57.142546   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:17:57.142617   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:17:57.142658   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:17:57.149965   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:57.167412   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:17:57.184830   45025 start.go:296] duration metric: took 168.748285ms for postStartSetup
	I1211 00:17:57.184913   45025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:17:57.184954   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.203305   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.304245   45025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:17:57.309118   45025 fix.go:56] duration metric: took 1.713893936s for fixHost
	I1211 00:17:57.309133   45025 start.go:83] releasing machines lock for "functional-786978", held for 1.713931903s
	I1211 00:17:57.309206   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:57.326163   45025 ssh_runner.go:195] Run: cat /version.json
	I1211 00:17:57.326207   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.326441   45025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:17:57.326492   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.346150   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.355283   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.447048   45025 ssh_runner.go:195] Run: systemctl --version
	I1211 00:17:57.543733   45025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:17:57.583708   45025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 00:17:57.588962   45025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:17:57.589026   45025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:17:57.598123   45025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:17:57.598147   45025 start.go:496] detecting cgroup driver to use...
	I1211 00:17:57.598178   45025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:17:57.598242   45025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:17:57.616553   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:17:57.632037   45025 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:17:57.632116   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:17:57.648871   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:17:57.662555   45025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:17:57.780641   45025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:17:57.896253   45025 docker.go:234] disabling docker service ...
	I1211 00:17:57.896308   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:17:57.910709   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:17:57.923903   45025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:17:58.032234   45025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:17:58.154255   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 00:17:58.166925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:17:58.180565   45025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:17:58.180619   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.189311   45025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:17:58.189376   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.198596   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.207202   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.215908   45025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:17:58.223742   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.232864   45025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.241359   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.249993   45025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:17:58.257330   45025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:17:58.264525   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.395006   45025 ssh_runner.go:195] Run: sudo systemctl restart crio
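
The sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) all land in the same drop-in file. A quick way to verify the result (a sketch; the surrounding keys come from the kicbase image and may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
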
	I1211 00:17:58.567132   45025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:17:58.567191   45025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:17:58.572106   45025 start.go:564] Will wait 60s for crictl version
	I1211 00:17:58.572166   45025 ssh_runner.go:195] Run: which crictl
	I1211 00:17:58.576600   45025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:17:58.605345   45025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:17:58.605434   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.635482   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.670505   45025 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:17:58.673486   45025 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:17:58.691254   45025 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1211 00:17:58.698413   45025 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1211 00:17:58.701098   45025 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:17:58.701227   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:58.701291   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.741056   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.741070   45025 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:17:58.741127   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.766313   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.766324   45025 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:17:58.766330   45025 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:17:58.766420   45025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 00:17:58.766498   45025 ssh_runner.go:195] Run: crio config
	I1211 00:17:58.831179   45025 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1211 00:17:58.831214   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:58.831224   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:58.831240   45025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:17:58.831262   45025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:17:58.831383   45025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 00:17:58.831452   45025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:17:58.839023   45025 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:17:58.839084   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:17:58.846528   45025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:17:58.859010   45025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:17:58.871952   45025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
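
Once the rendered config is on disk, it can be linted independently of a full init. A minimal sketch, assuming a kubeadm new enough to ship `kubeadm config validate` (v1.26+):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
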
	I1211 00:17:58.884395   45025 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:17:58.888346   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.999004   45025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:17:59.014620   45025 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:17:59.014632   45025 certs.go:195] generating shared ca certs ...
	I1211 00:17:59.014647   45025 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:17:59.014834   45025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:17:59.014887   45025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:17:59.014894   45025 certs.go:257] generating profile certs ...
	I1211 00:17:59.015111   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:17:59.015168   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:17:59.015206   45025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:17:59.015330   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:17:59.015361   45025 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:17:59.015369   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:17:59.015399   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:17:59.015424   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:17:59.015449   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:17:59.015495   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:59.016236   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:17:59.036319   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:17:59.054207   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:17:59.085140   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:17:59.102589   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:17:59.119619   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:17:59.137775   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:17:59.155046   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:17:59.173200   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:17:59.191371   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:17:59.208847   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:17:59.225559   45025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:17:59.238258   45025 ssh_runner.go:195] Run: openssl version
	I1211 00:17:59.244279   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.251482   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:17:59.258806   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262560   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262615   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.303500   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:17:59.310986   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.318422   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:17:59.325839   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329190   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329239   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.369865   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:17:59.377731   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.385365   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:17:59.392850   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396464   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396534   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.437551   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
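
The opaque names checked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash symlinks: `openssl x509 -hash` prints the hash, and the system trust store looks certificates up by a `<hash>.0` link. A sketch of the equivalent manual step for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
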
	I1211 00:17:59.445097   45025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:17:59.449099   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:17:59.490493   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:17:59.531562   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:17:59.572726   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:17:59.613479   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:17:59.656606   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
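
`-checkend 86400` makes openssl exit 0 only if the certificate is still valid 24 hours from now, which is what lets these checks gate the restart without parsing dates. A sketch:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "cert good for at least another 24h"
    else
        echo "cert expires within 24h (or failed to parse)" >&2
    fi
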
	I1211 00:17:59.697483   45025 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:59.697558   45025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:17:59.697631   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.726147   45025 cri.go:89] found id: ""
	I1211 00:17:59.726208   45025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:17:59.734119   45025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:17:59.734129   45025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:17:59.734181   45025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:17:59.741669   45025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.742193   45025 kubeconfig.go:125] found "functional-786978" server: "https://192.168.49.2:8441"
	I1211 00:17:59.743487   45025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:17:59.751799   45025 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-11 00:03:23.654512319 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-11 00:17:58.880060835 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
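
Drift detection here is just `diff -u` plus its exit status: 0 means the on-disk and freshly rendered configs match, 1 means they differ (as above, where only the admission-plugins value changed), and anything higher is an error. A sketch:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected; reconfiguring from the .new file"
    fi
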
	I1211 00:17:59.751819   45025 kubeadm.go:1161] stopping kube-system containers ...
	I1211 00:17:59.751836   45025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1211 00:17:59.751895   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.779633   45025 cri.go:89] found id: ""
	I1211 00:17:59.779698   45025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 00:17:59.796551   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:17:59.805010   45025 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 11 00:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 11 00:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 11 00:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 11 00:07 /etc/kubernetes/scheduler.conf
	
	I1211 00:17:59.805070   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:17:59.813093   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:17:59.820917   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.820973   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:17:59.828623   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.836494   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.836548   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.843945   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:17:59.851499   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.851553   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:17:59.859289   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:17:59.867193   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:17:59.916974   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.185880   45025 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.268883094s)
	I1211 00:18:02.185949   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.399533   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.467551   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
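
Rather than a full `kubeadm init`, the restart path replays individual phases against the existing data directory. The sequence logged above, as a standalone sketch (paths as logged; the `kubeadm init phase` subcommands are standard):

    K=/var/lib/minikube/binaries/v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # word-splitting of $phase is intentional: it expands to e.g. `certs all`
        sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
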
	I1211 00:18:02.514148   45025 api_server.go:52] waiting for apiserver process to appear ...
	I1211 00:18:02.514234   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.014347   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:03.515068   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.014554   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:04.515116   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.016511   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:05.515100   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.017684   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:06.515326   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.014433   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:07.515145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.014543   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:08.514950   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.015735   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:09.514456   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.015825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:10.514630   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.015335   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:11.514451   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.014804   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:12.514494   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.015458   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:13.514452   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.014884   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:14.514333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.022420   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:15.515034   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.017224   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:16.514464   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.015399   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:17.514329   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.015271   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:18.514462   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.017520   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:19.514376   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.017541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:20.515013   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.017761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:21.514358   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.014403   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:22.514344   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.017371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:23.515172   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.016422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:24.514490   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.020263   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:25.514922   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.014789   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:26.514345   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.015761   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:27.514955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.018541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:28.514310   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.014448   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:29.514337   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.018852   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:30.515041   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.020888   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:31.514298   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.022333   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:18:32.515045   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	(... the same pgrep probe repeated every ~500ms; 58 intermediate attempts from 00:18:33.014 to 00:19:01.514 elided, evidently without finding a matching process ...)
	I1211 00:19:02.015181   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
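The run of probes above is a readiness poll: roughly every 500ms the tool asks `pgrep -xnf` whether any process whose full command line matches `kube-apiserver.*minikube.*` exists, and keeps retrying while the command exits non-zero. A minimal sketch of that pattern, run locally rather than over SSH as in the log; the `waitForProcess` helper is illustrative, not minikube's actual implementation:

```go
// Poll for a process matching a pattern, the way the log above retries
// "pgrep -xnf kube-apiserver.*minikube.*" every ~500ms.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess (illustrative helper) runs `pgrep -xnf pattern` until it
// exits 0 (meaning a matching process exists) or the deadline passes.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep found at least one match
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 30*time.Second)
	fmt.Println("wait result:", err)
}
```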
	I1211 00:19:02.514444   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:02.514543   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:02.540506   45025 cri.go:89] found id: ""
	I1211 00:19:02.540520   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.540528   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:02.540533   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:02.540593   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:02.567414   45025 cri.go:89] found id: ""
	I1211 00:19:02.567427   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.567434   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:02.567439   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:02.567500   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:02.598249   45025 cri.go:89] found id: ""
	I1211 00:19:02.598263   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.598270   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:02.598277   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:02.598348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:02.624793   45025 cri.go:89] found id: ""
	I1211 00:19:02.624807   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.624822   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:02.624828   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:02.624894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:02.654153   45025 cri.go:89] found id: ""
	I1211 00:19:02.654170   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.654177   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:02.654182   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:02.654251   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:02.682217   45025 cri.go:89] found id: ""
	I1211 00:19:02.682231   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.682239   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:02.682244   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:02.682304   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:02.708660   45025 cri.go:89] found id: ""
	I1211 00:19:02.708674   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.708682   45025 logs.go:284] No container was found matching "kindnet"
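When the process probe gives up, the tool enumerates control-plane containers through the CRI instead: `crictl ps -a --quiet --name=<component>` prints only the IDs of containers (running or exited) whose name matches the filter, so an empty result is what produces the `found id: ""` / `0 containers` lines above. A rough equivalent, assuming crictl is installed and sudo is available; the `listContainers` helper is illustrative:

```go
// List container IDs by name filter via crictl, mirroring the
// "sudo crictl ps -a --quiet --name=<component>" calls in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns IDs of containers in any state whose name
// matches the filter; --quiet makes crictl print bare IDs, one per line.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output -> empty slice
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```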
	I1211 00:19:02.708690   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:02.708700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:02.775902   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:02.775921   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
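With no containers to inspect, log gathering falls back to host-level sources: the kubelet's systemd journal and the kernel ring buffer, each capped at 400 lines. Running the same pipelines from code is a matter of shelling out through bash so the `| tail -n 400` still applies; a minimal sketch, assuming a systemd host, with an illustrative `capture` helper:

```go
// Collect the host-level diagnostics the log gathers when no
// control-plane containers exist: kubelet journal and filtered dmesg.
package main

import (
	"fmt"
	"os/exec"
)

// capture runs a shell pipeline and returns its combined output, so
// pipes like "| tail -n 400" behave as they do in the log above.
func capture(pipeline string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
	return string(out), err
}

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		text, err := capture(cmd)
		fmt.Printf("=== %s: %d bytes (err=%v) ===\n", name, len(text), err)
	}
}
```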
	I1211 00:19:02.787446   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:02.787463   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:02.857001   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
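The `connection refused` from kubectl is consistent with the empty container listings: the client fails at the TCP connect to `localhost:8441`, before any TLS or authentication, meaning nothing is bound to the apiserver port at all. The same distinction (port closed vs. apiserver up but unhealthy) can be made without kubectl by a bare dial; a sketch, with the port taken from the log above:

```go
// Distinguish "apiserver port closed" (connection refused) from
// "something listening" with a plain TCP dial; this reproduces the failure
// kubectl reports above as "dial tcp [::1]:8441: connect: connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err) // expect "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```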
	I1211 00:19:02.857011   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:02.857022   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:02.927792   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:02.927812   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
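The final gathering step is a shell fallback chain: resolve crictl with `which` (defaulting to the bare name if that fails), and if the crictl invocation errors out, fall back to `docker ps -a`, so the report captures some container status regardless of which runtime is usable. A simplified version of the same preference order (dropping the `which` resolution), assuming sudo access to both CLIs:

```go
// Show container status with the same fallback order as the log's
// `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer the CRI-level view; fall back to docker if crictl fails
	// (missing binary, unreachable CRI socket, etc.).
	for _, argv := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("%v failed: %v\n", argv, err)
	}
}
```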
	I1211 00:19:05.458523   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:05.468377   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:05.468436   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:05.492943   45025 cri.go:89] found id: ""
	I1211 00:19:05.492957   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.492963   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:05.492968   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:05.493030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:05.520504   45025 cri.go:89] found id: ""
	I1211 00:19:05.520517   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.520525   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:05.520530   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:05.520592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:05.551505   45025 cri.go:89] found id: ""
	I1211 00:19:05.551518   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.551525   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:05.551531   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:05.551586   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:05.580658   45025 cri.go:89] found id: ""
	I1211 00:19:05.580672   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.580679   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:05.580683   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:05.580757   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:05.607012   45025 cri.go:89] found id: ""
	I1211 00:19:05.607026   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.607033   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:05.607038   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:05.607102   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:05.632061   45025 cri.go:89] found id: ""
	I1211 00:19:05.632075   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.632082   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:05.632087   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:05.632152   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:05.658481   45025 cri.go:89] found id: ""
	I1211 00:19:05.658494   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.658514   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:05.658522   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:05.658533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:05.724859   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:05.724876   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:05.735886   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:05.735901   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:05.798612   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:05.798622   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:05.798634   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:05.867342   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:05.867360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:08.400995   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:08.413387   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:08.413449   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:08.448131   45025 cri.go:89] found id: ""
	I1211 00:19:08.448144   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.448151   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:08.448157   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:08.448216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:08.477588   45025 cri.go:89] found id: ""
	I1211 00:19:08.477601   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.477608   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:08.477612   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:08.477671   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:08.502742   45025 cri.go:89] found id: ""
	I1211 00:19:08.502755   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.502763   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:08.502768   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:08.502826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:08.528585   45025 cri.go:89] found id: ""
	I1211 00:19:08.528598   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.528606   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:08.528611   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:08.528674   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:08.559543   45025 cri.go:89] found id: ""
	I1211 00:19:08.559557   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.559564   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:08.559569   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:08.559630   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:08.585362   45025 cri.go:89] found id: ""
	I1211 00:19:08.585377   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.585384   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:08.585390   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:08.585462   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:08.611828   45025 cri.go:89] found id: ""
	I1211 00:19:08.611842   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.611849   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:08.611856   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:08.611866   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:08.678470   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:08.678488   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:08.691361   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:08.691376   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:08.762621   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:08.762636   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:08.762649   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:08.832475   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:08.832493   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:11.361776   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:11.371640   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:11.371694   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:11.398476   45025 cri.go:89] found id: ""
	I1211 00:19:11.398489   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.398496   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:11.398501   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:11.398559   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:11.429955   45025 cri.go:89] found id: ""
	I1211 00:19:11.429969   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.429976   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:11.429982   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:11.430037   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:11.457296   45025 cri.go:89] found id: ""
	I1211 00:19:11.457309   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.457316   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:11.457324   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:11.457382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:11.482941   45025 cri.go:89] found id: ""
	I1211 00:19:11.482954   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.482962   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:11.483012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:11.483069   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:11.508408   45025 cri.go:89] found id: ""
	I1211 00:19:11.508431   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.508438   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:11.508443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:11.508510   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:11.533840   45025 cri.go:89] found id: ""
	I1211 00:19:11.533854   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.533869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:11.533875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:11.533950   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:11.559317   45025 cri.go:89] found id: ""
	I1211 00:19:11.559331   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.559338   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:11.559345   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:11.559354   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:11.626027   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:11.626045   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:11.637884   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:11.637900   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:11.704689   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:11.704700   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:11.704711   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:11.774803   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:11.774821   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.306913   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:14.318077   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:14.318146   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:14.343407   45025 cri.go:89] found id: ""
	I1211 00:19:14.343421   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.343428   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:14.343433   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:14.343497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:14.370322   45025 cri.go:89] found id: ""
	I1211 00:19:14.370336   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.370342   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:14.370348   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:14.370406   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:14.397449   45025 cri.go:89] found id: ""
	I1211 00:19:14.397462   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.397469   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:14.397474   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:14.397531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:14.430459   45025 cri.go:89] found id: ""
	I1211 00:19:14.430472   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.430479   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:14.430501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:14.430595   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:14.461756   45025 cri.go:89] found id: ""
	I1211 00:19:14.461769   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.461776   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:14.461781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:14.461849   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:14.488174   45025 cri.go:89] found id: ""
	I1211 00:19:14.488189   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.488196   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:14.488201   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:14.488258   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:14.517330   45025 cri.go:89] found id: ""
	I1211 00:19:14.517343   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.517350   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:14.517357   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:14.517368   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.549197   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:14.549215   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:14.618908   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:14.618926   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:14.630263   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:14.630279   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:14.698427   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:14.698437   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:14.698453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.273043   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:17.283257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:17.283323   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:17.308437   45025 cri.go:89] found id: ""
	I1211 00:19:17.308450   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.308457   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:17.308462   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:17.308522   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:17.337454   45025 cri.go:89] found id: ""
	I1211 00:19:17.337467   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.337474   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:17.337479   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:17.337538   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:17.363695   45025 cri.go:89] found id: ""
	I1211 00:19:17.363709   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.363717   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:17.363722   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:17.363781   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:17.388300   45025 cri.go:89] found id: ""
	I1211 00:19:17.388314   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.388321   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:17.388327   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:17.388383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:17.418934   45025 cri.go:89] found id: ""
	I1211 00:19:17.418947   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.418954   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:17.418959   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:17.419036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:17.453193   45025 cri.go:89] found id: ""
	I1211 00:19:17.453207   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.453214   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:17.453220   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:17.453308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:17.487806   45025 cri.go:89] found id: ""
	I1211 00:19:17.487820   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.487827   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:17.487834   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:17.487845   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:17.553739   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:17.553758   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:17.564920   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:17.564936   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:17.630666   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:17.630680   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:17.630705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.701596   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:17.701614   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:20.234880   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:20.244988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:20.245050   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:20.273088   45025 cri.go:89] found id: ""
	I1211 00:19:20.273101   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.273109   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:20.273114   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:20.273175   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:20.302062   45025 cri.go:89] found id: ""
	I1211 00:19:20.302076   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.302083   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:20.302089   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:20.302157   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:20.326827   45025 cri.go:89] found id: ""
	I1211 00:19:20.326841   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.326859   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:20.326865   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:20.326922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:20.356288   45025 cri.go:89] found id: ""
	I1211 00:19:20.356302   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.356309   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:20.356315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:20.356375   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:20.382358   45025 cri.go:89] found id: ""
	I1211 00:19:20.382373   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.382380   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:20.382386   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:20.382445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:20.417393   45025 cri.go:89] found id: ""
	I1211 00:19:20.417407   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.417424   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:20.417430   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:20.417488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:20.447521   45025 cri.go:89] found id: ""
	I1211 00:19:20.447534   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.447541   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:20.447550   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:20.447560   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:20.518467   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:20.518484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:20.530666   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:20.530681   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:20.599280   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:20.599290   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:20.599301   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:20.666760   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:20.666778   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.200454   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:23.210413   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:23.210471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:23.234734   45025 cri.go:89] found id: ""
	I1211 00:19:23.234748   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.234756   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:23.234761   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:23.234822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:23.260526   45025 cri.go:89] found id: ""
	I1211 00:19:23.260540   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.260547   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:23.260552   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:23.260611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:23.284278   45025 cri.go:89] found id: ""
	I1211 00:19:23.284291   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.284298   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:23.284303   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:23.284360   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:23.309416   45025 cri.go:89] found id: ""
	I1211 00:19:23.309431   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.309438   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:23.309443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:23.309502   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:23.335667   45025 cri.go:89] found id: ""
	I1211 00:19:23.335682   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.335689   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:23.335695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:23.335751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:23.364847   45025 cri.go:89] found id: ""
	I1211 00:19:23.364862   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.364869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:23.364875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:23.364941   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:23.389436   45025 cri.go:89] found id: ""
	I1211 00:19:23.389449   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.389457   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:23.389464   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:23.389477   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:23.402133   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:23.402149   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:23.484989   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:23.484999   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:23.485010   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:23.553567   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:23.553586   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.583342   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:23.583359   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
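Each retry cycle in this section follows the same readiness probe: run pgrep for a kube-apiserver process, then ask crictl for each expected control-plane container by name, and fall through to log gathering when every query comes back empty. A minimal Go sketch of that polling pattern (illustrative only; the helper name, component list, and two-minute deadline are assumptions, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs runs `crictl ps -a --quiet --name=<name>` and returns any IDs found.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"}
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process found")
			return
		}
		for _, c := range components {
			if len(containerIDs(c)) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence between cycles in this log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}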
	I1211 00:19:26.151360   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:26.161613   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:26.161676   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:26.187432   45025 cri.go:89] found id: ""
	I1211 00:19:26.187446   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.187453   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:26.187459   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:26.187514   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:26.212567   45025 cri.go:89] found id: ""
	I1211 00:19:26.212581   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.212588   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:26.212593   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:26.212650   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:26.238347   45025 cri.go:89] found id: ""
	I1211 00:19:26.238359   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.238367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:26.238372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:26.238426   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:26.264493   45025 cri.go:89] found id: ""
	I1211 00:19:26.264506   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.264513   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:26.264518   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:26.264578   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:26.289421   45025 cri.go:89] found id: ""
	I1211 00:19:26.289435   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.289442   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:26.289446   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:26.289512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:26.317737   45025 cri.go:89] found id: ""
	I1211 00:19:26.317751   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.317758   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:26.317776   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:26.317832   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:26.342012   45025 cri.go:89] found id: ""
	I1211 00:19:26.342025   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.342032   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:26.342039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:26.342049   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:26.409907   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:26.409925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:26.444709   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:26.444725   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.520673   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:26.520692   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:26.533201   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:26.533217   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:26.595360   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:29.096255   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:29.106290   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:29.106348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:29.135863   45025 cri.go:89] found id: ""
	I1211 00:19:29.135876   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.135883   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:29.135888   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:29.135948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:29.162996   45025 cri.go:89] found id: ""
	I1211 00:19:29.163011   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.163018   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:29.163024   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:29.163104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:29.189722   45025 cri.go:89] found id: ""
	I1211 00:19:29.189738   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.189745   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:29.189749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:29.189834   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:29.215022   45025 cri.go:89] found id: ""
	I1211 00:19:29.215036   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.215042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:29.215047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:29.215106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:29.240657   45025 cri.go:89] found id: ""
	I1211 00:19:29.240671   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.240679   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:29.240684   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:29.240744   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:29.265406   45025 cri.go:89] found id: ""
	I1211 00:19:29.265420   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.265427   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:29.265432   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:29.265488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:29.289115   45025 cri.go:89] found id: ""
	I1211 00:19:29.289128   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.289136   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:29.289143   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:29.289154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:29.316627   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:29.316646   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:29.381873   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:29.381892   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:29.392836   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:29.392852   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:29.474052   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:29.474062   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:29.474072   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
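Every kubectl attempt above dies with dial tcp [::1]:8441: connect: connection refused, which is consistent with crictl finding no kube-apiserver container: nothing is listening on the apiserver port at all. A quick Go sketch that reproduces the same symptom directly, assuming the port 8441 from the kubeconfig used in this log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port kubectl is using; with no apiserver container
	// running, this fails immediately with "connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}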
	I1211 00:19:32.041538   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:32.052288   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:32.052353   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:32.078058   45025 cri.go:89] found id: ""
	I1211 00:19:32.078071   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.078078   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:32.078084   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:32.078143   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:32.104226   45025 cri.go:89] found id: ""
	I1211 00:19:32.104240   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.104251   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:32.104256   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:32.104315   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:32.130104   45025 cri.go:89] found id: ""
	I1211 00:19:32.130123   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.130130   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:32.130135   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:32.130196   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:32.156116   45025 cri.go:89] found id: ""
	I1211 00:19:32.156131   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.156138   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:32.156143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:32.156204   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:32.182027   45025 cri.go:89] found id: ""
	I1211 00:19:32.182039   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.182046   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:32.182051   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:32.182119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:32.206462   45025 cri.go:89] found id: ""
	I1211 00:19:32.206476   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.206483   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:32.206488   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:32.206553   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:32.230714   45025 cri.go:89] found id: ""
	I1211 00:19:32.230727   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.230734   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:32.230757   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:32.230773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:32.295411   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:32.295430   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:32.306690   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:32.306705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:32.373425   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:32.373435   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:32.373446   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.441247   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:32.441264   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:34.988442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:34.998718   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:34.998785   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:35.036207   45025 cri.go:89] found id: ""
	I1211 00:19:35.036221   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.036231   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:35.036236   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:35.036298   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:35.062611   45025 cri.go:89] found id: ""
	I1211 00:19:35.062624   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.062631   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:35.062636   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:35.062692   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:35.089089   45025 cri.go:89] found id: ""
	I1211 00:19:35.089102   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.089109   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:35.089115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:35.089177   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:35.116537   45025 cri.go:89] found id: ""
	I1211 00:19:35.116550   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.116558   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:35.116564   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:35.116625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:35.141369   45025 cri.go:89] found id: ""
	I1211 00:19:35.141383   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.141390   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:35.141396   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:35.141464   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:35.167717   45025 cri.go:89] found id: ""
	I1211 00:19:35.167731   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.167738   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:35.167746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:35.167805   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:35.193275   45025 cri.go:89] found id: ""
	I1211 00:19:35.193288   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.193295   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:35.193303   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:35.193313   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:35.223396   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:35.223412   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:35.291423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:35.291442   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:35.302744   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:35.302760   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:35.366712   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:35.366722   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:35.366732   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
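With no containers to inspect, each cycle falls back to re-collecting the same log sources over SSH: journalctl for kubelet and CRI-O, a severity-filtered dmesg, and container status. A compact Go sketch that runs those same commands locally (the command strings are copied from the log lines above; the error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: collected %d bytes\n", name, len(out))
	}
}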
	I1211 00:19:37.940570   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:37.951183   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:37.951244   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:37.977384   45025 cri.go:89] found id: ""
	I1211 00:19:37.977412   45025 logs.go:282] 0 containers: []
	W1211 00:19:37.977419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:37.977425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:37.977489   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:38.002327   45025 cri.go:89] found id: ""
	I1211 00:19:38.002341   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.002349   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:38.002354   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:38.002433   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:38.032932   45025 cri.go:89] found id: ""
	I1211 00:19:38.032947   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.032955   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:38.032960   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:38.033023   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:38.060494   45025 cri.go:89] found id: ""
	I1211 00:19:38.060508   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.060516   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:38.060522   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:38.060584   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:38.090424   45025 cri.go:89] found id: ""
	I1211 00:19:38.090438   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.090445   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:38.090450   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:38.090511   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:38.117237   45025 cri.go:89] found id: ""
	I1211 00:19:38.117250   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.117258   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:38.117268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:38.117330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:38.144173   45025 cri.go:89] found id: ""
	I1211 00:19:38.144187   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.144195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:38.144203   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:38.144213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:38.213450   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:38.213474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:38.224711   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:38.224727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:38.292623   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:38.292634   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:38.292644   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:38.360121   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:38.360139   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
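The container-status command just above is itself a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl wherever it resolves and only falls back to docker when crictl fails. The same preference order in Go (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try the CRI CLI first; mirror the `|| sudo docker ps -a` branch on failure.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("no container runtime CLI responded:", err)
			return
		}
	}
	fmt.Print(string(out))
}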
	I1211 00:19:40.897394   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:40.907308   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:40.907368   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:40.935845   45025 cri.go:89] found id: ""
	I1211 00:19:40.935861   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.935868   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:40.935874   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:40.935936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:40.961885   45025 cri.go:89] found id: ""
	I1211 00:19:40.961899   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.961906   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:40.961911   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:40.961972   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:40.992115   45025 cri.go:89] found id: ""
	I1211 00:19:40.992129   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.992136   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:40.992141   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:40.992199   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:41.017243   45025 cri.go:89] found id: ""
	I1211 00:19:41.017259   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.017269   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:41.017274   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:41.017355   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:41.046002   45025 cri.go:89] found id: ""
	I1211 00:19:41.046016   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.046022   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:41.046027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:41.046097   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:41.072198   45025 cri.go:89] found id: ""
	I1211 00:19:41.072212   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.072220   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:41.072225   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:41.072297   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:41.097305   45025 cri.go:89] found id: ""
	I1211 00:19:41.097319   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.097326   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:41.097352   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:41.097363   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:41.163075   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:41.163095   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:41.174199   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:41.174214   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:41.239512   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:41.239535   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:41.239556   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:41.311901   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:41.311918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:43.842688   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:43.853001   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:43.853061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:43.877321   45025 cri.go:89] found id: ""
	I1211 00:19:43.877335   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.877342   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:43.877347   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:43.877403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:43.905861   45025 cri.go:89] found id: ""
	I1211 00:19:43.905874   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.905882   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:43.905887   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:43.905948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:43.931275   45025 cri.go:89] found id: ""
	I1211 00:19:43.931289   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.931309   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:43.931315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:43.931383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:43.957472   45025 cri.go:89] found id: ""
	I1211 00:19:43.957485   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.957492   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:43.957497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:43.957556   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:43.987995   45025 cri.go:89] found id: ""
	I1211 00:19:43.988009   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.988016   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:43.988022   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:43.988082   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:44.015918   45025 cri.go:89] found id: ""
	I1211 00:19:44.015934   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.015942   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:44.015948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:44.016028   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:44.044784   45025 cri.go:89] found id: ""
	I1211 00:19:44.044797   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.044804   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:44.044812   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:44.044825   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:44.111423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:44.111440   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:44.122746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:44.122766   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:44.196525   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:44.196536   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:44.196547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:44.264322   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:44.264340   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:46.797073   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:46.807248   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:46.807312   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:46.833629   45025 cri.go:89] found id: ""
	I1211 00:19:46.833643   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.833650   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:46.833656   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:46.833722   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:46.860316   45025 cri.go:89] found id: ""
	I1211 00:19:46.860329   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.860337   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:46.860342   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:46.860403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:46.886240   45025 cri.go:89] found id: ""
	I1211 00:19:46.886253   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.886261   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:46.886265   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:46.886324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:46.911538   45025 cri.go:89] found id: ""
	I1211 00:19:46.911552   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.911559   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:46.911565   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:46.911625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:46.938014   45025 cri.go:89] found id: ""
	I1211 00:19:46.938029   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.938036   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:46.938041   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:46.938105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:46.965253   45025 cri.go:89] found id: ""
	I1211 00:19:46.965267   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.965274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:46.965279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:46.965339   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:46.991686   45025 cri.go:89] found id: ""
	I1211 00:19:46.991699   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.991706   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:46.991714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:46.991727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:47.057610   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:47.057627   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:47.069235   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:47.069251   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:47.137186   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:47.137197   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:47.137220   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:47.206375   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:47.206397   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:49.735135   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:49.745127   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:49.745191   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:49.770237   45025 cri.go:89] found id: ""
	I1211 00:19:49.770250   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.770257   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:49.770262   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:49.770319   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:49.795789   45025 cri.go:89] found id: ""
	I1211 00:19:49.795803   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.795810   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:49.795815   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:49.795872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:49.825306   45025 cri.go:89] found id: ""
	I1211 00:19:49.825319   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.825326   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:49.825331   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:49.825388   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:49.855190   45025 cri.go:89] found id: ""
	I1211 00:19:49.855204   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.855211   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:49.855216   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:49.855281   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:49.881199   45025 cri.go:89] found id: ""
	I1211 00:19:49.881212   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.881219   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:49.881224   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:49.881280   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:49.906616   45025 cri.go:89] found id: ""
	I1211 00:19:49.906629   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.906636   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:49.906641   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:49.906698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:49.933814   45025 cri.go:89] found id: ""
	I1211 00:19:49.933828   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.933835   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:49.933842   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:49.933859   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:49.944994   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:49.945009   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:50.007164   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:50.007174   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:50.007184   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:50.077454   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:50.077472   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:50.110740   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:50.110757   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.683928   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:52.694104   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:52.694167   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:52.725399   45025 cri.go:89] found id: ""
	I1211 00:19:52.725413   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.725420   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:52.725425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:52.725483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:52.751850   45025 cri.go:89] found id: ""
	I1211 00:19:52.751863   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.751870   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:52.751875   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:52.751937   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:52.780571   45025 cri.go:89] found id: ""
	I1211 00:19:52.780584   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.780591   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:52.780595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:52.780653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:52.809728   45025 cri.go:89] found id: ""
	I1211 00:19:52.809741   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.809748   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:52.809753   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:52.809808   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:52.834891   45025 cri.go:89] found id: ""
	I1211 00:19:52.834904   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.834910   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:52.834915   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:52.835007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:52.861606   45025 cri.go:89] found id: ""
	I1211 00:19:52.861619   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.861626   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:52.861631   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:52.861688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:52.888101   45025 cri.go:89] found id: ""
	I1211 00:19:52.888115   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.888122   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:52.888130   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:52.888140   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.953090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:52.953108   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:52.964419   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:52.964435   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:53.034074   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
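	[editor's note] The block above is one complete pass of minikube's log-gathering loop, and the same sequence repeats every few seconds while it waits for the apiserver. Condensed into a standalone sketch for readability (each command below appears verbatim in the log; only the for-loop wrapper over the container names is added for illustration):
	    # one pass of the diagnostics minikube runs over SSH while waiting
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name=$c   # empty output => that component has no container at all
	    done
	    sudo journalctl -u kubelet -n 400       # kubelet logs
	    sudo journalctl -u crio -n 400          # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	Every pass finds zero containers for every control-plane component, so the "describe nodes" step can only keep failing with the connection-refused error until an apiserver container exists.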
	I1211 00:19:53.034091   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:53.034102   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:53.105399   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:53.105417   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.638422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:55.648339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:55.648396   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:55.678848   45025 cri.go:89] found id: ""
	I1211 00:19:55.678868   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.678876   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:55.678884   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:55.678953   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:55.718935   45025 cri.go:89] found id: ""
	I1211 00:19:55.718959   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.718987   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:55.718992   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:55.719061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:55.743738   45025 cri.go:89] found id: ""
	I1211 00:19:55.743751   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.743758   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:55.743763   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:55.743822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:55.769117   45025 cri.go:89] found id: ""
	I1211 00:19:55.769130   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.769137   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:55.769143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:55.769207   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:55.795500   45025 cri.go:89] found id: ""
	I1211 00:19:55.795529   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.795537   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:55.795542   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:55.795611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:55.824959   45025 cri.go:89] found id: ""
	I1211 00:19:55.824972   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.824979   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:55.824984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:55.825042   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:55.850737   45025 cri.go:89] found id: ""
	I1211 00:19:55.850750   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.850768   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:55.850776   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:55.850787   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.878584   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:55.878600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:55.943684   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:55.943701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:55.954898   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:55.954914   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:56.024872   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:56.024883   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:56.024893   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.594636   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:58.605403   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:58.605467   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:58.637164   45025 cri.go:89] found id: ""
	I1211 00:19:58.637178   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.637189   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:58.637194   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:58.637252   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:58.682644   45025 cri.go:89] found id: ""
	I1211 00:19:58.682657   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.682664   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:58.682672   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:58.682728   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:58.714474   45025 cri.go:89] found id: ""
	I1211 00:19:58.714488   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.714495   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:58.714500   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:58.714558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:58.745457   45025 cri.go:89] found id: ""
	I1211 00:19:58.745470   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.745484   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:58.745489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:58.745545   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:58.771678   45025 cri.go:89] found id: ""
	I1211 00:19:58.771691   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.771704   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:58.771710   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:58.771770   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:58.796493   45025 cri.go:89] found id: ""
	I1211 00:19:58.796507   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.796514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:58.796519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:58.796576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:58.821870   45025 cri.go:89] found id: ""
	I1211 00:19:58.821884   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.821892   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:58.821899   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:58.821909   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.894510   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:58.894537   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:58.927576   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:58.927595   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:58.994438   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:58.994455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:59.005360   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:59.005377   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:59.073100   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:01.573622   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:01.584703   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:01.584773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:01.612873   45025 cri.go:89] found id: ""
	I1211 00:20:01.612888   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.612895   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:01.612901   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:01.612964   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:01.641246   45025 cri.go:89] found id: ""
	I1211 00:20:01.641259   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.641267   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:01.641272   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:01.641330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:01.670560   45025 cri.go:89] found id: ""
	I1211 00:20:01.670574   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.670582   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:01.670587   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:01.670652   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:01.697783   45025 cri.go:89] found id: ""
	I1211 00:20:01.697797   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.697804   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:01.697809   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:01.697870   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:01.724991   45025 cri.go:89] found id: ""
	I1211 00:20:01.725005   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.725013   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:01.725019   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:01.725078   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:01.751948   45025 cri.go:89] found id: ""
	I1211 00:20:01.751961   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.751969   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:01.751976   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:01.752036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:01.782191   45025 cri.go:89] found id: ""
	I1211 00:20:01.782204   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.782211   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:01.782218   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:01.782228   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:01.849183   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:01.849203   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:01.863105   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:01.863127   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:01.948480   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:01.948490   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:01.948501   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:02.031526   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:02.031546   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.563706   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:04.573944   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:04.573999   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:04.604222   45025 cri.go:89] found id: ""
	I1211 00:20:04.604235   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.604242   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:04.604247   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:04.604308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:04.633340   45025 cri.go:89] found id: ""
	I1211 00:20:04.633353   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.633361   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:04.633365   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:04.633427   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:04.663258   45025 cri.go:89] found id: ""
	I1211 00:20:04.663289   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.663297   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:04.663302   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:04.663373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:04.690031   45025 cri.go:89] found id: ""
	I1211 00:20:04.690044   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.690051   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:04.690056   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:04.690112   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:04.716219   45025 cri.go:89] found id: ""
	I1211 00:20:04.716232   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.716240   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:04.716256   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:04.716317   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:04.742460   45025 cri.go:89] found id: ""
	I1211 00:20:04.742474   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.742481   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:04.742497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:04.742564   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:04.774107   45025 cri.go:89] found id: ""
	I1211 00:20:04.774121   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.774128   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:04.774136   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:04.774146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.806436   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:04.806453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:04.872547   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:04.872566   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:04.884075   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:04.884092   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:04.982628   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:04.982638   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:04.982650   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.551877   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:07.561860   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:07.561924   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:07.586162   45025 cri.go:89] found id: ""
	I1211 00:20:07.586175   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.586192   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:07.586198   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:07.586254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:07.611295   45025 cri.go:89] found id: ""
	I1211 00:20:07.611309   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.611316   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:07.611321   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:07.611377   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:07.637224   45025 cri.go:89] found id: ""
	I1211 00:20:07.637237   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.637245   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:07.637249   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:07.637306   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:07.666366   45025 cri.go:89] found id: ""
	I1211 00:20:07.666379   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.666386   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:07.666391   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:07.666451   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:07.691800   45025 cri.go:89] found id: ""
	I1211 00:20:07.691814   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.691822   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:07.691827   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:07.691885   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:07.717290   45025 cri.go:89] found id: ""
	I1211 00:20:07.717304   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.717321   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:07.717326   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:07.717382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:07.747011   45025 cri.go:89] found id: ""
	I1211 00:20:07.747024   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.747031   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:07.747039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:07.747048   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.816300   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:07.816318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:07.850783   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:07.850798   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:07.920354   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:07.920371   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:07.932012   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:07.932027   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:07.996529   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
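	[editor's note] Note the cadence: the pgrep probes land at 00:19:47, :49, :52, :55, :58, 00:20:01, :04, :07, :10, ..., i.e. the waiter re-checks roughly every three seconds and each check still finds no kube-apiserver process. A hypothetical equivalent of that outer wait loop (the 3s interval is inferred from the timestamps, not taken from minikube's source):
	    # keep probing until an apiserver process shows up
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done
	    echo "kube-apiserver is running"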
	I1211 00:20:10.496978   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:10.507125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:10.507193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:10.532780   45025 cri.go:89] found id: ""
	I1211 00:20:10.532794   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.532801   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:10.532807   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:10.532863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:10.558194   45025 cri.go:89] found id: ""
	I1211 00:20:10.558207   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.558214   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:10.558219   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:10.558277   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:10.583482   45025 cri.go:89] found id: ""
	I1211 00:20:10.583496   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.583503   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:10.583508   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:10.583566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:10.608826   45025 cri.go:89] found id: ""
	I1211 00:20:10.608840   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.608847   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:10.608851   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:10.608910   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:10.637533   45025 cri.go:89] found id: ""
	I1211 00:20:10.637548   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.637554   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:10.637559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:10.637620   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:10.662448   45025 cri.go:89] found id: ""
	I1211 00:20:10.662463   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.662471   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:10.662478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:10.662535   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:10.688164   45025 cri.go:89] found id: ""
	I1211 00:20:10.688187   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.688195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:10.688203   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:10.688213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:10.718946   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:10.718981   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:10.783972   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:10.783992   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:10.795392   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:10.795408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:10.862892   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:10.862901   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:10.862911   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.437541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:13.447617   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:13.447679   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:13.473117   45025 cri.go:89] found id: ""
	I1211 00:20:13.473131   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.473139   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:13.473144   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:13.473200   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:13.498616   45025 cri.go:89] found id: ""
	I1211 00:20:13.498629   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.498636   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:13.498641   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:13.498698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:13.525802   45025 cri.go:89] found id: ""
	I1211 00:20:13.525824   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.525832   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:13.525836   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:13.525904   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:13.552063   45025 cri.go:89] found id: ""
	I1211 00:20:13.552077   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.552084   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:13.552092   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:13.552153   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:13.576789   45025 cri.go:89] found id: ""
	I1211 00:20:13.576802   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.576809   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:13.576816   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:13.576872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:13.602028   45025 cri.go:89] found id: ""
	I1211 00:20:13.602042   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.602059   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:13.602065   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:13.602120   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:13.629268   45025 cri.go:89] found id: ""
	I1211 00:20:13.629282   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.629299   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:13.629307   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:13.629318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:13.694395   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:13.694413   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:13.705346   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:13.705362   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:13.771138   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
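Every `connection refused` line above points at the same condition: nothing is listening on the profile's API server port, 8441, which is consistent with the empty `crictl` scans (no kube-apiserver container ever started). A quick probe that reproduces the failure; only the endpoint is taken from the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same endpoint kubectl keeps failing against in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // expect "connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```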
	I1211 00:20:13.771148   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:13.771158   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.842879   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:13.842896   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
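The container-status gather above uses a shell fallback: `crictl ps -a` first, and `docker ps -a` only if that fails. The same first-success pattern in Go, as a sketch; the command strings are copied from the log and the `firstSuccess` helper is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// firstSuccess runs the commands in order and returns the output of the first
// one that exits 0: the "crictl, else docker" fallback seen in the log.
func firstSuccess(cmds ...string) (string, error) {
	var lastErr error
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = err
	}
	return "", lastErr
}

func main() {
	out, err := firstSuccess("sudo crictl ps -a", "sudo docker ps -a")
	if err != nil {
		fmt.Println("no container runtime CLI responded:", err)
		return
	}
	fmt.Print(out)
}
```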
	I1211 00:20:16.379425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:16.389574   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:16.389639   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:16.414634   45025 cri.go:89] found id: ""
	I1211 00:20:16.414647   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.414654   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:16.414659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:16.414721   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:16.441274   45025 cri.go:89] found id: ""
	I1211 00:20:16.441287   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.441293   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:16.441298   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:16.441352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:16.466318   45025 cri.go:89] found id: ""
	I1211 00:20:16.466331   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.466338   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:16.466343   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:16.466399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:16.492814   45025 cri.go:89] found id: ""
	I1211 00:20:16.492827   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.492834   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:16.492839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:16.492894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:16.518104   45025 cri.go:89] found id: ""
	I1211 00:20:16.518117   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.518125   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:16.518130   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:16.518193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:16.543245   45025 cri.go:89] found id: ""
	I1211 00:20:16.543260   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.543267   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:16.543272   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:16.543331   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:16.567767   45025 cri.go:89] found id: ""
	I1211 00:20:16.567781   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.567788   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:16.567795   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:16.567806   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:16.635880   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:16.635897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:16.647253   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:16.647269   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:16.711132   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:16.711143   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:16.711154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:16.781461   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:16.781479   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.312031   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:19.322411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:19.322469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:19.349102   45025 cri.go:89] found id: ""
	I1211 00:20:19.349116   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.349124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:19.349129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:19.349190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:19.373803   45025 cri.go:89] found id: ""
	I1211 00:20:19.373818   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.373825   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:19.373830   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:19.373891   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:19.402187   45025 cri.go:89] found id: ""
	I1211 00:20:19.402201   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.402208   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:19.402213   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:19.402274   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:19.427606   45025 cri.go:89] found id: ""
	I1211 00:20:19.427620   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.427628   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:19.427633   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:19.427693   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:19.452647   45025 cri.go:89] found id: ""
	I1211 00:20:19.452660   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.452667   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:19.452671   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:19.452732   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:19.482184   45025 cri.go:89] found id: ""
	I1211 00:20:19.482198   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.482205   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:19.482211   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:19.482266   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:19.508334   45025 cri.go:89] found id: ""
	I1211 00:20:19.508348   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.508355   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:19.508369   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:19.508379   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:19.582679   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:19.582703   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.613878   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:19.613897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:19.688185   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:19.688206   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:19.699902   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:19.699917   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:19.768799   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.269027   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:22.278950   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:22.279030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:22.303632   45025 cri.go:89] found id: ""
	I1211 00:20:22.303646   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.303653   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:22.303659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:22.303714   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:22.329589   45025 cri.go:89] found id: ""
	I1211 00:20:22.329602   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.329647   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:22.329653   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:22.329707   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:22.359724   45025 cri.go:89] found id: ""
	I1211 00:20:22.359737   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.359744   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:22.359749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:22.359806   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:22.385684   45025 cri.go:89] found id: ""
	I1211 00:20:22.385697   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.385704   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:22.385709   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:22.385768   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:22.411515   45025 cri.go:89] found id: ""
	I1211 00:20:22.411529   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.411536   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:22.411541   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:22.411601   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:22.437841   45025 cri.go:89] found id: ""
	I1211 00:20:22.437858   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.437865   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:22.437870   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:22.437926   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:22.462799   45025 cri.go:89] found id: ""
	I1211 00:20:22.462812   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.462819   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:22.462830   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:22.462840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:22.530683   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:22.530700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:22.541777   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:22.541792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:22.606464   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.606473   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:22.606484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:22.675683   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:22.675704   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:25.205679   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:25.215714   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:25.215772   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:25.240624   45025 cri.go:89] found id: ""
	I1211 00:20:25.240637   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.240644   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:25.240650   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:25.240704   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:25.266729   45025 cri.go:89] found id: ""
	I1211 00:20:25.266743   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.266761   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:25.266766   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:25.266833   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:25.292270   45025 cri.go:89] found id: ""
	I1211 00:20:25.292284   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.292291   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:25.292296   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:25.292352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:25.316988   45025 cri.go:89] found id: ""
	I1211 00:20:25.317013   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.317021   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:25.317027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:25.317094   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:25.342079   45025 cri.go:89] found id: ""
	I1211 00:20:25.342092   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.342100   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:25.342105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:25.342166   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:25.369363   45025 cri.go:89] found id: ""
	I1211 00:20:25.369376   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.369383   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:25.369388   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:25.369445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:25.395141   45025 cri.go:89] found id: ""
	I1211 00:20:25.395155   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.395166   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:25.395173   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:25.395183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:25.459743   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:25.459761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:25.470311   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:25.470325   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:25.537864   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:25.537874   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:25.537884   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:25.605782   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:25.605800   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:28.140709   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:28.152210   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:28.152270   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:28.192161   45025 cri.go:89] found id: ""
	I1211 00:20:28.192175   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.192182   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:28.192188   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:28.192254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:28.226107   45025 cri.go:89] found id: ""
	I1211 00:20:28.226121   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.226128   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:28.226133   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:28.226190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:28.252351   45025 cri.go:89] found id: ""
	I1211 00:20:28.252364   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.252371   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:28.252376   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:28.252437   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:28.277856   45025 cri.go:89] found id: ""
	I1211 00:20:28.277869   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.277876   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:28.277882   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:28.277942   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:28.303425   45025 cri.go:89] found id: ""
	I1211 00:20:28.303442   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.303449   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:28.303454   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:28.303533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:28.327952   45025 cri.go:89] found id: ""
	I1211 00:20:28.327965   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.327973   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:28.327978   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:28.328036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:28.352541   45025 cri.go:89] found id: ""
	I1211 00:20:28.352556   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.352563   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:28.352571   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:28.352581   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:28.417587   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:28.417606   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:28.428990   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:28.429005   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:28.493232   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:28.493242   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:28.493252   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:28.561239   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:28.561257   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.093955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:31.104422   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:31.104484   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:31.130996   45025 cri.go:89] found id: ""
	I1211 00:20:31.131011   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.131018   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:31.131023   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:31.131088   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:31.170443   45025 cri.go:89] found id: ""
	I1211 00:20:31.170457   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.170465   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:31.170470   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:31.170531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:31.204748   45025 cri.go:89] found id: ""
	I1211 00:20:31.204769   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.204777   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:31.204781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:31.204846   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:31.235573   45025 cri.go:89] found id: ""
	I1211 00:20:31.235587   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.235594   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:31.235606   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:31.235664   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:31.260669   45025 cri.go:89] found id: ""
	I1211 00:20:31.260683   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.260690   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:31.260695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:31.260753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:31.286253   45025 cri.go:89] found id: ""
	I1211 00:20:31.286267   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.286274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:31.286279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:31.286338   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:31.313885   45025 cri.go:89] found id: ""
	I1211 00:20:31.313903   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.313910   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:31.313917   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:31.313928   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:31.376250   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:31.376260   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:31.376271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:31.445930   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:31.445948   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.477909   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:31.477923   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:31.547558   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:31.547575   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.060343   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:34.071407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:34.071468   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:34.097367   45025 cri.go:89] found id: ""
	I1211 00:20:34.097381   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.097389   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:34.097394   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:34.097455   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:34.125233   45025 cri.go:89] found id: ""
	I1211 00:20:34.125246   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.125253   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:34.125258   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:34.125313   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:34.152711   45025 cri.go:89] found id: ""
	I1211 00:20:34.152724   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.152731   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:34.152735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:34.152797   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:34.183533   45025 cri.go:89] found id: ""
	I1211 00:20:34.183547   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.183553   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:34.183559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:34.183627   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:34.212367   45025 cri.go:89] found id: ""
	I1211 00:20:34.212379   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.212386   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:34.212392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:34.212450   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:34.239991   45025 cri.go:89] found id: ""
	I1211 00:20:34.240005   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.240012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:34.240017   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:34.240084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:34.265795   45025 cri.go:89] found id: ""
	I1211 00:20:34.265809   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.265816   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:34.265823   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:34.265833   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:34.335452   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:34.335471   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:34.366714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:34.366729   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:34.434761   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:34.434779   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.445767   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:34.445782   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:34.513054   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.014301   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:37.029619   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:37.029688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:37.061510   45025 cri.go:89] found id: ""
	I1211 00:20:37.061525   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.061533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:37.061539   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:37.061597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:37.087429   45025 cri.go:89] found id: ""
	I1211 00:20:37.087442   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.087449   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:37.087454   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:37.087513   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:37.113865   45025 cri.go:89] found id: ""
	I1211 00:20:37.113878   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.113885   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:37.113890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:37.113951   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:37.139634   45025 cri.go:89] found id: ""
	I1211 00:20:37.139647   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.139655   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:37.139659   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:37.139723   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:37.177513   45025 cri.go:89] found id: ""
	I1211 00:20:37.177527   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.177535   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:37.177540   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:37.177599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:37.207209   45025 cri.go:89] found id: ""
	I1211 00:20:37.207223   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.207230   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:37.207235   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:37.207291   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:37.235860   45025 cri.go:89] found id: ""
	I1211 00:20:37.235874   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.235880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:37.235888   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:37.235898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:37.302242   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:37.302260   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:37.313364   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:37.313380   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:37.383109   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:37.383119   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:37.383134   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:37.452480   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:37.452497   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
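
Every cycle in this stretch is the same ~3-second health probe: look for a kube-apiserver process, list CRI containers for each control-plane component, gather kubelet/dmesg/CRI-O logs, and retry kubectl, which keeps failing because nothing answers on localhost:8441. A minimal standalone Go sketch of that apiserver probe (not minikube's actual code; the /healthz path, 2s request timeout, and 3-minute deadline are assumptions read off the timestamps above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip cert verification: during bring-up the apiserver (if it answers
	// at all) presents a self-signed certificate.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(3 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8441/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		// This is the state the log is stuck in: connect: connection refused.
		fmt.Println("apiserver not up yet:", err)
		time.Sleep(3 * time.Second) // matches the ~3s spacing of the cycles above
	}
	fmt.Println("gave up waiting for kube-apiserver on :8441")
}
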
	I1211 00:20:39.981534   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:39.992011   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:39.992074   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:40.037108   45025 cri.go:89] found id: ""
	I1211 00:20:40.037123   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.037131   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:40.037137   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:40.037205   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:40.073935   45025 cri.go:89] found id: ""
	I1211 00:20:40.073950   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.073958   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:40.073963   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:40.074024   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:40.103233   45025 cri.go:89] found id: ""
	I1211 00:20:40.103247   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.103255   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:40.103260   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:40.103324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:40.130384   45025 cri.go:89] found id: ""
	I1211 00:20:40.130398   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.130405   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:40.130411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:40.130482   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:40.168123   45025 cri.go:89] found id: ""
	I1211 00:20:40.168137   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.168143   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:40.168149   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:40.168209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:40.206729   45025 cri.go:89] found id: ""
	I1211 00:20:40.206743   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.206750   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:40.206755   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:40.206814   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:40.237917   45025 cri.go:89] found id: ""
	I1211 00:20:40.237930   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.237937   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:40.237945   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:40.237954   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:40.306231   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:40.306249   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:40.335237   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:40.335256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:40.407102   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:40.407124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:40.418948   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:40.418987   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:40.487059   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:42.987371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:42.997627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:42.997687   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:43.034834   45025 cri.go:89] found id: ""
	I1211 00:20:43.034847   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.034854   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:43.034858   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:43.034917   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:43.061014   45025 cri.go:89] found id: ""
	I1211 00:20:43.061028   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.061035   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:43.061040   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:43.061111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:43.086728   45025 cri.go:89] found id: ""
	I1211 00:20:43.086742   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.086749   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:43.086754   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:43.086815   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:43.112537   45025 cri.go:89] found id: ""
	I1211 00:20:43.112551   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.112557   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:43.112563   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:43.112619   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:43.138331   45025 cri.go:89] found id: ""
	I1211 00:20:43.138358   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.138365   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:43.138370   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:43.138440   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:43.177883   45025 cri.go:89] found id: ""
	I1211 00:20:43.177895   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.177902   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:43.177908   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:43.177976   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:43.208963   45025 cri.go:89] found id: ""
	I1211 00:20:43.208976   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.208984   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:43.208991   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:43.209001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:43.276100   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:43.276119   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:43.287251   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:43.287266   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:43.358374   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:43.358389   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:43.358399   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:43.430845   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:43.430863   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
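
Every `found id: ""` above is an empty result from the crictl probe on the preceding line. A hedged local reproduction of that sweep, assuming crictl is installed on the node and sudo works non-interactively (invocation and component names copied verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same invocation as the log: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		// An empty list is exactly the `0 containers: []` case logged above.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
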
	I1211 00:20:45.960980   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:45.971128   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:45.971189   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:45.997483   45025 cri.go:89] found id: ""
	I1211 00:20:45.997497   45025 logs.go:282] 0 containers: []
	W1211 00:20:45.997504   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:45.997509   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:45.997566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:46.030243   45025 cri.go:89] found id: ""
	I1211 00:20:46.030257   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.030265   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:46.030280   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:46.030341   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:46.057812   45025 cri.go:89] found id: ""
	I1211 00:20:46.057826   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.057834   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:46.057839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:46.057896   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:46.094313   45025 cri.go:89] found id: ""
	I1211 00:20:46.094326   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.094334   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:46.094339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:46.094403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:46.120781   45025 cri.go:89] found id: ""
	I1211 00:20:46.120796   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.120803   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:46.120808   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:46.120867   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:46.153078   45025 cri.go:89] found id: ""
	I1211 00:20:46.153091   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.153099   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:46.153105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:46.153164   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:46.184025   45025 cri.go:89] found id: ""
	I1211 00:20:46.184038   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.184045   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:46.184052   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:46.184065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:46.195376   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:46.195391   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:46.264561   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:46.264571   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:46.264583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:46.334575   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:46.334592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:46.365686   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:46.365701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:48.932730   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:48.943221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:48.943289   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:48.970754   45025 cri.go:89] found id: ""
	I1211 00:20:48.970769   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.970775   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:48.970781   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:48.970851   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:48.998179   45025 cri.go:89] found id: ""
	I1211 00:20:48.998193   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.998200   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:48.998205   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:48.998265   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:49.027459   45025 cri.go:89] found id: ""
	I1211 00:20:49.027472   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.027485   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:49.027490   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:49.027554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:49.053666   45025 cri.go:89] found id: ""
	I1211 00:20:49.053693   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.053700   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:49.053705   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:49.053773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:49.080140   45025 cri.go:89] found id: ""
	I1211 00:20:49.080155   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.080162   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:49.080167   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:49.080223   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:49.106258   45025 cri.go:89] found id: ""
	I1211 00:20:49.106281   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.106289   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:49.106294   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:49.106362   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:49.131929   45025 cri.go:89] found id: ""
	I1211 00:20:49.131952   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.131960   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:49.131967   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:49.131978   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:49.216291   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:49.216315   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:49.247289   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:49.247308   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:49.319005   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:49.319026   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:49.330154   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:49.330171   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:49.399415   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
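
The "Gathering logs for ..." steps are plain shell one-liners run over SSH on the node. A sketch that runs the same three collectors locally (commands copied from the ssh_runner lines above; running them anywhere but the minikube node is an assumption and needs the same systemd units present):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	collectors := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
	}
	for _, c := range collectors {
		fmt.Println("==> gathering logs for", c.name)
		// bash -c mirrors how the log shows each command being run.
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", c.name, err)
		}
		fmt.Print(string(out))
	}
}
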
	I1211 00:20:51.899678   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:51.910510   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:51.910571   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:51.941358   45025 cri.go:89] found id: ""
	I1211 00:20:51.941372   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.941379   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:51.941384   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:51.941441   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:51.972273   45025 cri.go:89] found id: ""
	I1211 00:20:51.972287   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.972295   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:51.972300   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:51.972357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:51.998172   45025 cri.go:89] found id: ""
	I1211 00:20:51.998184   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.998191   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:51.998197   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:51.998256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:52.028439   45025 cri.go:89] found id: ""
	I1211 00:20:52.028453   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.028460   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:52.028465   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:52.028526   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:52.060485   45025 cri.go:89] found id: ""
	I1211 00:20:52.060500   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.060508   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:52.060513   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:52.060574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:52.093990   45025 cri.go:89] found id: ""
	I1211 00:20:52.094005   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.094012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:52.094018   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:52.094084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:52.122577   45025 cri.go:89] found id: ""
	I1211 00:20:52.122592   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.122599   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:52.122606   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:52.122624   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:52.191378   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:52.191396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:52.203404   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:52.203421   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:52.272572   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:52.272582   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:52.272592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:52.340655   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:52.340672   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:54.871996   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:54.882238   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:54.882299   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:54.908417   45025 cri.go:89] found id: ""
	I1211 00:20:54.908430   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.908437   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:54.908442   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:54.908512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:54.937462   45025 cri.go:89] found id: ""
	I1211 00:20:54.937475   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.937482   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:54.937487   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:54.937547   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:54.965546   45025 cri.go:89] found id: ""
	I1211 00:20:54.965560   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.965567   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:54.965572   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:54.965629   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:54.991381   45025 cri.go:89] found id: ""
	I1211 00:20:54.991395   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.991403   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:54.991407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:54.991469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:55.023225   45025 cri.go:89] found id: ""
	I1211 00:20:55.023243   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.023251   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:55.023257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:55.023340   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:55.069033   45025 cri.go:89] found id: ""
	I1211 00:20:55.069049   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.069056   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:55.069062   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:55.069130   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:55.104401   45025 cri.go:89] found id: ""
	I1211 00:20:55.104417   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.104424   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:55.104432   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:55.104444   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:55.117919   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:55.117939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:55.207253   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:20:55.207264   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:55.207275   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:55.285978   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:55.286001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:55.318311   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:55.318327   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:57.883510   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:57.893407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:57.893478   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:57.918657   45025 cri.go:89] found id: ""
	I1211 00:20:57.918670   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.918677   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:57.918684   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:57.918739   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:57.944248   45025 cri.go:89] found id: ""
	I1211 00:20:57.944261   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.944268   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:57.944274   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:57.944337   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:57.969321   45025 cri.go:89] found id: ""
	I1211 00:20:57.969335   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.969342   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:57.969347   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:57.969403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:57.994466   45025 cri.go:89] found id: ""
	I1211 00:20:57.994482   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.994490   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:57.994495   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:57.994554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:58.021937   45025 cri.go:89] found id: ""
	I1211 00:20:58.021954   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.021962   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:58.021967   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:58.022033   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:58.048826   45025 cri.go:89] found id: ""
	I1211 00:20:58.048840   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.048848   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:58.048854   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:58.048912   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:58.077218   45025 cri.go:89] found id: ""
	I1211 00:20:58.077231   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.077239   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:58.077246   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:58.077256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:58.145681   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:58.145698   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:58.191796   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:58.191814   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:58.268737   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:58.268756   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:58.280057   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:58.280074   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:58.347775   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
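
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when nothing matches, which is why the probe falls through to the container listings every time. A minimal sketch of that first check (flags copied from the log; non-interactive sudo is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 on no match, which exec reports as an error:
		// the situation every cycle in this log hits.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
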
	I1211 00:21:00.848653   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:00.859447   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:00.859507   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:00.885107   45025 cri.go:89] found id: ""
	I1211 00:21:00.885123   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.885130   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:00.885136   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:00.885195   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:00.916160   45025 cri.go:89] found id: ""
	I1211 00:21:00.916174   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.916181   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:00.916186   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:00.916242   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:00.941904   45025 cri.go:89] found id: ""
	I1211 00:21:00.941918   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.941926   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:00.941931   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:00.941996   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:00.969553   45025 cri.go:89] found id: ""
	I1211 00:21:00.969566   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.969573   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:00.969579   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:00.969640   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:00.995856   45025 cri.go:89] found id: ""
	I1211 00:21:00.995869   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.995876   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:00.995881   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:00.995936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:01.023643   45025 cri.go:89] found id: ""
	I1211 00:21:01.023672   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.023679   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:01.023685   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:01.023753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:01.049959   45025 cri.go:89] found id: ""
	I1211 00:21:01.049972   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.049979   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:01.049986   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:01.049996   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:01.117206   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:01.117224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:01.129158   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:01.129174   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:01.221837   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:01.221848   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:01.221858   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:01.292030   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:01.292052   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:03.824471   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:03.834984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:03.835048   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:03.865620   45025 cri.go:89] found id: ""
	I1211 00:21:03.865633   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.865640   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:03.865646   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:03.865706   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:03.894960   45025 cri.go:89] found id: ""
	I1211 00:21:03.895000   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.895012   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:03.895018   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:03.895093   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:03.922002   45025 cri.go:89] found id: ""
	I1211 00:21:03.922016   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.922033   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:03.922039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:03.922114   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:03.949011   45025 cri.go:89] found id: ""
	I1211 00:21:03.949025   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.949032   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:03.949037   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:03.949104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:03.979941   45025 cri.go:89] found id: ""
	I1211 00:21:03.979955   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.979983   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:03.979988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:03.980056   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:04.005356   45025 cri.go:89] found id: ""
	I1211 00:21:04.005379   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.005386   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:04.005392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:04.005498   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:04.036172   45025 cri.go:89] found id: ""
	I1211 00:21:04.036193   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.036201   45025 logs.go:284] No container was found matching "kindnet"
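The cri.go:54/89 lines above are one pass of a per-component container scan. A rough local equivalent of that loop, assuming it runs directly on a host with crictl installed (minikube itself executes the identical commands through its ssh_runner inside the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component list taken from the scan in the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		// Mirrors: sudo crictl ps -a --quiet --name=<component>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %s: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out)) // --quiet prints one container ID per line
    		if len(ids) == 0 {
    			// The log's case: No container was found matching "<name>"
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

Here every component returns an empty ID list, so the control plane never got past kubelet/static-pod startup.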
	I1211 00:21:04.036210   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:04.036224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:04.075735   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:04.075754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:04.141955   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:04.141976   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:04.154375   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:04.154390   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:04.236732   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the five memcache.go:265 errors and the connection-refused line above)
	** /stderr **
	I1211 00:21:04.236744   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:04.236754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
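After the scan comes up empty, logs.go:123 falls back to collecting a diagnostic bundle from fixed sources. A sketch of that fan-out under the same assumption of direct shell access, reusing the exact command strings from the ssh_runner lines above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings copied from the ssh_runner.go lines in the log.
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("gathering %s failed: %v\n", s.name, err)
    		}
    		fmt.Printf("=== %s: %d bytes ===\n", s.name, len(out))
    	}
    }

Note the container-status command's double fallback: it prefers whatever `which crictl` resolves to and only drops to `docker ps -a` if the crictl invocation fails outright.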
	[... the same collection cycle repeats every ~3s, at 00:21:06, 00:21:09, 00:21:12, 00:21:15, 00:21:18, 00:21:21, 00:21:24, and 00:21:27: each pass runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers via crictl, regathers the kubelet/dmesg/CRI-O/container-status logs, and fails `kubectl describe nodes` (kubectl pids 15438, 15539, 15651, 15757, 15856, 15967, 16073, 16175) with the same connection-refused errors against localhost:8441; the log continues in this pattern ...]
	I1211 00:21:27.716455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:27.783513   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:27.783533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.814010   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:27.814025   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
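	Each polling cycle starts the same way: scan the CRI for every control-plane component by name and record that nothing matches. A hedged bash re-creation of that scan, using the exact crictl invocation from the log (the component list is copied from the cycles above; the real loop lives in minikube's cri.go/logs.go):

	    # Ask CRI-O for any container, running or exited, matching each component name.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done

	Because the scan uses ps -a (all states), an empty result for all seven names, as in every cycle above, suggests the control-plane containers were never created or were removed, not merely that they crashed.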
	I1211 00:21:30.382748   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:30.393371   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:30.393432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:30.427609   45025 cri.go:89] found id: ""
	I1211 00:21:30.427623   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.427629   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:30.427635   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:30.427696   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:30.457893   45025 cri.go:89] found id: ""
	I1211 00:21:30.457907   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.457913   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:30.457918   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:30.457980   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:30.492222   45025 cri.go:89] found id: ""
	I1211 00:21:30.492234   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.492241   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:30.492246   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:30.492303   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:30.521511   45025 cri.go:89] found id: ""
	I1211 00:21:30.521525   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.521532   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:30.521537   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:30.521597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:30.547821   45025 cri.go:89] found id: ""
	I1211 00:21:30.547835   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.547842   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:30.547847   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:30.547906   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:30.572652   45025 cri.go:89] found id: ""
	I1211 00:21:30.572666   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.572675   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:30.572681   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:30.572737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:30.601878   45025 cri.go:89] found id: ""
	I1211 00:21:30.601906   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.601914   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:30.601921   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:30.601932   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:30.613084   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:30.613100   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:30.683127   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:30.683136   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:30.683146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:30.750689   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:30.750707   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:30.784168   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:30.784183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.353720   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:33.363733   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:33.363790   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:33.391890   45025 cri.go:89] found id: ""
	I1211 00:21:33.391904   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.391911   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:33.391917   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:33.391984   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:33.423803   45025 cri.go:89] found id: ""
	I1211 00:21:33.423816   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.423823   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:33.423828   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:33.423889   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:33.458122   45025 cri.go:89] found id: ""
	I1211 00:21:33.458135   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.458142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:33.458147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:33.458206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:33.485705   45025 cri.go:89] found id: ""
	I1211 00:21:33.485718   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.485725   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:33.485730   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:33.485786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:33.513596   45025 cri.go:89] found id: ""
	I1211 00:21:33.513609   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.513617   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:33.513622   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:33.513681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:33.539390   45025 cri.go:89] found id: ""
	I1211 00:21:33.539403   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.539412   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:33.539418   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:33.539474   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:33.564837   45025 cri.go:89] found id: ""
	I1211 00:21:33.564849   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.564856   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:33.564863   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:33.564873   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.629883   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:33.629902   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:33.641102   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:33.641118   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:33.708725   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:33.708736   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:33.708746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:33.777920   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:33.777939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
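	Every describe-nodes attempt dies the same way: "dial tcp [::1]:8441: connect: connection refused", i.e. nothing is listening on the apiserver port at all. A manual probe (my own addition, not something the harness runs) that separates "nothing listening" from "listening but unhealthy":

	    # Exit code 7 = connection refused, matching the log; an HTTP 200 "ok" would mean
	    # the apiserver is up and the failure lies elsewhere.
	    curl -sk --max-time 5 https://localhost:8441/healthz; echo "exit=$?"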
	I1211 00:21:36.306840   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:36.318198   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:36.318256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:36.347923   45025 cri.go:89] found id: ""
	I1211 00:21:36.347936   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.347943   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:36.347948   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:36.348003   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:36.372908   45025 cri.go:89] found id: ""
	I1211 00:21:36.372921   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.372928   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:36.372934   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:36.372994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:36.398449   45025 cri.go:89] found id: ""
	I1211 00:21:36.398462   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.398470   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:36.398478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:36.398533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:36.438503   45025 cri.go:89] found id: ""
	I1211 00:21:36.438516   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.438523   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:36.438528   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:36.438585   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:36.468232   45025 cri.go:89] found id: ""
	I1211 00:21:36.468245   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.468253   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:36.468257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:36.468318   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:36.494076   45025 cri.go:89] found id: ""
	I1211 00:21:36.494089   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.494096   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:36.494101   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:36.494168   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:36.521654   45025 cri.go:89] found id: ""
	I1211 00:21:36.521668   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.521676   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:36.521689   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:36.521700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:36.590822   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:36.590840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.620876   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:36.620891   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:36.689379   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:36.689396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:36.700340   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:36.700355   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:36.768766   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:39.270429   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:39.280501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:39.280558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:39.308182   45025 cri.go:89] found id: ""
	I1211 00:21:39.308203   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.308212   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:39.308218   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:39.308278   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:39.334096   45025 cri.go:89] found id: ""
	I1211 00:21:39.334113   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.334123   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:39.334132   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:39.334203   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:39.360088   45025 cri.go:89] found id: ""
	I1211 00:21:39.360101   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.360108   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:39.360115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:39.360174   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:39.386315   45025 cri.go:89] found id: ""
	I1211 00:21:39.386328   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.386336   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:39.386341   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:39.386399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:39.418994   45025 cri.go:89] found id: ""
	I1211 00:21:39.419008   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.419015   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:39.419020   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:39.419081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:39.446027   45025 cri.go:89] found id: ""
	I1211 00:21:39.446040   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.446047   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:39.446052   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:39.446119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:39.474854   45025 cri.go:89] found id: ""
	I1211 00:21:39.474867   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.474880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:39.474888   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:39.474898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:39.548615   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:39.548635   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:39.577039   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:39.577058   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:39.643644   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:39.643662   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:39.654782   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:39.654797   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:39.721483   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
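	The timestamps show the harness re-probing on a steady two-to-three-second cadence. The equivalent deadline/poll loop in shell form (illustrative only; the 120-second budget is an assumption, the real timeout is set in the Go test code):

	    # Poll for a kube-apiserver process until it appears or the deadline passes.
	    deadline=$((SECONDS + 120))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never appeared" >&2
	        exit 1
	      fi
	      sleep 3
	    done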
	I1211 00:21:42.221753   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:42.234138   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:42.234209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:42.265615   45025 cri.go:89] found id: ""
	I1211 00:21:42.265631   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.265639   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:42.265645   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:42.265716   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:42.295341   45025 cri.go:89] found id: ""
	I1211 00:21:42.295357   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.295365   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:42.295371   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:42.295432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:42.324010   45025 cri.go:89] found id: ""
	I1211 00:21:42.324025   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.324032   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:42.324039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:42.324101   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:42.355998   45025 cri.go:89] found id: ""
	I1211 00:21:42.356012   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.356020   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:42.356025   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:42.356087   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:42.385254   45025 cri.go:89] found id: ""
	I1211 00:21:42.385267   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.385275   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:42.385279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:42.385379   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:42.418942   45025 cri.go:89] found id: ""
	I1211 00:21:42.418956   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.418986   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:42.418993   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:42.419049   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:42.446484   45025 cri.go:89] found id: ""
	I1211 00:21:42.446497   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.446504   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:42.446511   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:42.446522   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:42.521774   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:42.521792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:42.533107   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:42.533124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:42.601857   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:42.601867   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:42.601877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:42.670754   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:42.670773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.205036   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:45.223242   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:45.223325   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:45.290545   45025 cri.go:89] found id: ""
	I1211 00:21:45.290560   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.290567   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:45.290580   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:45.290653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:45.321549   45025 cri.go:89] found id: ""
	I1211 00:21:45.321562   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.321581   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:45.321587   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:45.321660   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:45.351332   45025 cri.go:89] found id: ""
	I1211 00:21:45.351345   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.351353   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:45.351358   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:45.351418   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:45.377195   45025 cri.go:89] found id: ""
	I1211 00:21:45.377208   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.377215   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:45.377221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:45.377284   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:45.414830   45025 cri.go:89] found id: ""
	I1211 00:21:45.414844   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.414852   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:45.414857   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:45.414922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:45.444982   45025 cri.go:89] found id: ""
	I1211 00:21:45.444996   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.445003   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:45.445008   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:45.445065   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:45.475344   45025 cri.go:89] found id: ""
	I1211 00:21:45.475358   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.475365   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:45.475372   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:45.475388   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:45.544982   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:45.545000   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.578028   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:45.578044   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:45.650334   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:45.650360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:45.661530   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:45.661547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:45.726146   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
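	For offline triage of a run like this one, the four log sources the harness keeps polling can be captured once into files. These are the same journalctl/dmesg/crictl invocations as in the cycles above; the output filenames are my own choice:

	    sudo journalctl -u kubelet -n 400 > kubelet.log
	    sudo journalctl -u crio -n 400 > crio.log
	    sudo dmesg --level=warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    sudo crictl ps -a > containers.log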
	I1211 00:21:48.226425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:48.236595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:48.236655   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:48.264517   45025 cri.go:89] found id: ""
	I1211 00:21:48.264531   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.264538   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:48.264544   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:48.264602   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:48.291335   45025 cri.go:89] found id: ""
	I1211 00:21:48.291349   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.291356   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:48.291361   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:48.291420   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:48.317975   45025 cri.go:89] found id: ""
	I1211 00:21:48.317996   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.318005   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:48.318010   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:48.318090   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:48.343743   45025 cri.go:89] found id: ""
	I1211 00:21:48.343757   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.343764   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:48.343769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:48.343839   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:48.370548   45025 cri.go:89] found id: ""
	I1211 00:21:48.370561   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.370568   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:48.370573   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:48.370633   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:48.398956   45025 cri.go:89] found id: ""
	I1211 00:21:48.398991   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.398999   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:48.399004   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:48.399081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:48.432879   45025 cri.go:89] found id: ""
	I1211 00:21:48.432892   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.432900   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:48.432908   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:48.432918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:48.514612   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:48.514631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:48.526574   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:48.526589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:48.594430   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:48.594439   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:48.594449   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:48.662467   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:48.662487   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:51.193260   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:51.203850   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:51.203909   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:51.229218   45025 cri.go:89] found id: ""
	I1211 00:21:51.229232   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.229240   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:51.229249   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:51.229307   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:51.255535   45025 cri.go:89] found id: ""
	I1211 00:21:51.255549   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.255556   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:51.255561   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:51.255617   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:51.281281   45025 cri.go:89] found id: ""
	I1211 00:21:51.281295   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.281302   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:51.281306   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:51.281366   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:51.305242   45025 cri.go:89] found id: ""
	I1211 00:21:51.305256   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.305263   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:51.305268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:51.305324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:51.330682   45025 cri.go:89] found id: ""
	I1211 00:21:51.330695   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.330712   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:51.330717   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:51.330786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:51.361324   45025 cri.go:89] found id: ""
	I1211 00:21:51.361338   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.361345   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:51.361351   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:51.361410   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:51.387177   45025 cri.go:89] found id: ""
	I1211 00:21:51.387191   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.387198   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:51.387205   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:51.387216   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:51.461910   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:51.461927   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:51.473746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:51.473761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:51.542962   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:51.542994   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:51.543008   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:51.611981   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:51.612003   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:54.140885   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:54.151154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:54.151216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:54.177398   45025 cri.go:89] found id: ""
	I1211 00:21:54.177412   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.177419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:54.177424   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:54.177483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:54.202665   45025 cri.go:89] found id: ""
	I1211 00:21:54.202679   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.202686   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:54.202691   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:54.202751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:54.228121   45025 cri.go:89] found id: ""
	I1211 00:21:54.228135   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.228142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:54.228147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:54.228206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:54.254699   45025 cri.go:89] found id: ""
	I1211 00:21:54.254713   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.254726   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:54.254732   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:54.254794   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:54.280912   45025 cri.go:89] found id: ""
	I1211 00:21:54.280926   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.280934   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:54.280939   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:54.281000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:54.309917   45025 cri.go:89] found id: ""
	I1211 00:21:54.309930   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.309937   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:54.309943   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:54.310000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:54.335081   45025 cri.go:89] found id: ""
	I1211 00:21:54.335094   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.335102   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:54.335110   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:54.335120   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:54.402799   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:54.402819   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:54.423966   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:54.423982   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:54.493676   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:54.493685   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:54.493695   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:54.562184   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:54.562202   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.095145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:57.105735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:57.105793   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:57.137586   45025 cri.go:89] found id: ""
	I1211 00:21:57.137600   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.137607   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:57.137612   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:57.137669   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:57.162960   45025 cri.go:89] found id: ""
	I1211 00:21:57.162997   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.163004   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:57.163009   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:57.163068   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:57.189960   45025 cri.go:89] found id: ""
	I1211 00:21:57.189982   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.189989   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:57.189994   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:57.190059   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:57.215046   45025 cri.go:89] found id: ""
	I1211 00:21:57.215059   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.215067   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:57.215072   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:57.215129   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:57.239646   45025 cri.go:89] found id: ""
	I1211 00:21:57.239659   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.239678   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:57.239682   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:57.239737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:57.264818   45025 cri.go:89] found id: ""
	I1211 00:21:57.264832   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.264839   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:57.264844   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:57.264913   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:57.290063   45025 cri.go:89] found id: ""
	I1211 00:21:57.290076   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.290083   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:57.290090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:57.290103   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:57.300820   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:57.300834   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:57.366226   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:57.366236   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:57.366246   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:57.435439   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:57.435458   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.464292   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:57.464311   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.034825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:00.107263   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:22:00.107592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:22:00.209037   45025 cri.go:89] found id: ""
	I1211 00:22:00.209052   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.209060   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:22:00.209065   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:22:00.209139   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:22:00.259397   45025 cri.go:89] found id: ""
	I1211 00:22:00.259413   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.259420   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:22:00.259426   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:22:00.259499   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:22:00.300996   45025 cri.go:89] found id: ""
	I1211 00:22:00.301011   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.301020   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:22:00.301026   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:22:00.301121   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:22:00.355749   45025 cri.go:89] found id: ""
	I1211 00:22:00.355766   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.355775   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:22:00.355782   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:22:00.355863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:22:00.397265   45025 cri.go:89] found id: ""
	I1211 00:22:00.397279   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.397287   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:22:00.397292   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:22:00.397357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:22:00.431985   45025 cri.go:89] found id: ""
	I1211 00:22:00.432000   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.432008   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:22:00.432014   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:22:00.432079   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:22:00.475122   45025 cri.go:89] found id: ""
	I1211 00:22:00.475138   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.475145   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:22:00.475154   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:22:00.475165   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.544019   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:22:00.544039   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:22:00.556109   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:22:00.556126   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:22:00.625124   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:22:00.625135   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:22:00.625146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:22:00.693368   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:22:00.693387   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:22:03.226119   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:03.236558   45025 kubeadm.go:602] duration metric: took 4m3.502420888s to restartPrimaryControlPlane
	W1211 00:22:03.236621   45025 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 00:22:03.236698   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:22:03.653513   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:22:03.666451   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:22:03.674394   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:22:03.674497   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:22:03.682496   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:22:03.682506   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:22:03.682556   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:22:03.690253   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:22:03.690312   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:22:03.697814   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:22:03.705532   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:22:03.705584   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:22:03.712909   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.720642   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:22:03.720704   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.728085   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:22:03.735639   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:22:03.735694   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:22:03.743458   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:22:03.864690   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:22:03.865125   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:22:03.931571   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:26:05.371070   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:26:05.371093   45025 kubeadm.go:319] 
	I1211 00:26:05.371179   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:26:05.375684   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.375734   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:05.375839   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:05.375903   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:05.375950   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:05.375995   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:05.376042   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:05.376088   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:05.376135   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:05.376181   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:05.376229   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:05.376273   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:05.376319   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:05.376364   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:05.376435   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:05.376530   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:05.376618   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:05.376680   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:05.379737   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:05.379839   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:05.379918   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:05.380012   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:05.380083   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:05.380156   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:05.380207   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:05.380283   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:05.380352   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:05.380433   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:05.380508   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:05.380558   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:05.380610   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:05.380656   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:05.380709   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:05.380759   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:05.380821   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:05.380871   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:05.380957   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:05.381029   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:05.383945   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:05.384057   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:05.384159   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:05.384228   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:05.384331   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:05.384422   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:05.384548   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:05.384657   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:05.384704   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:05.384857   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:05.384973   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:26:05.385047   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182146s
	I1211 00:26:05.385051   45025 kubeadm.go:319] 
	I1211 00:26:05.385122   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:26:05.385153   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:26:05.385275   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:26:05.385279   45025 kubeadm.go:319] 
	I1211 00:26:05.385390   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:26:05.385422   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:26:05.385452   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:26:05.385461   45025 kubeadm.go:319] 
	W1211 00:26:05.385565   45025 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182146s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1211 00:26:05.385656   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 00:26:05.805014   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:26:05.817222   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:26:05.817275   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:26:05.825148   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:26:05.825157   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:26:05.825207   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:26:05.832932   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:26:05.832991   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:26:05.840249   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:26:05.848087   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:26:05.848149   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:26:05.855944   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.863906   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:26:05.863960   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.871464   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:26:05.879062   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:26:05.879116   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:26:05.886444   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:26:05.923722   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.924046   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:06.002092   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:06.002152   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:06.002191   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:06.002233   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:06.002283   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:06.002332   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:06.002377   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:06.002429   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:06.002486   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:06.002528   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:06.002578   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:06.002626   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:06.076323   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:06.076462   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:06.076570   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:06.087446   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:06.092847   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:06.092964   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:06.093051   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:06.093134   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:06.093195   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:06.093273   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:06.093327   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:06.093390   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:06.093452   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:06.093529   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:06.093602   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:06.093639   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:06.093696   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:06.504239   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:06.701840   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:07.114481   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:07.226723   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:07.349377   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:07.350330   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:07.353007   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:07.356354   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:07.356511   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:07.356601   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:07.356672   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:07.373379   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:07.373693   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:07.381535   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:07.381916   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:07.382096   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:07.509380   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:07.509514   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:30:07.509220   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00004985s
	I1211 00:30:07.509346   45025 kubeadm.go:319] 
	I1211 00:30:07.509429   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:30:07.509464   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:30:07.509569   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:30:07.509574   45025 kubeadm.go:319] 
	I1211 00:30:07.509677   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:30:07.509708   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:30:07.509737   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:30:07.509740   45025 kubeadm.go:319] 
	I1211 00:30:07.513952   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:30:07.514370   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:30:07.514477   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:30:07.514741   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 00:30:07.514745   45025 kubeadm.go:319] 
	I1211 00:30:07.514828   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:30:07.514885   45025 kubeadm.go:403] duration metric: took 12m7.817411267s to StartCluster
	I1211 00:30:07.514914   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:30:07.514994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:30:07.541269   45025 cri.go:89] found id: ""
	I1211 00:30:07.541283   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.541291   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:30:07.541299   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:30:07.541373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:30:07.568371   45025 cri.go:89] found id: ""
	I1211 00:30:07.568385   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.568392   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:30:07.568397   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:30:07.568452   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:30:07.593463   45025 cri.go:89] found id: ""
	I1211 00:30:07.593477   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.593484   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:30:07.593489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:30:07.593551   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:30:07.617718   45025 cri.go:89] found id: ""
	I1211 00:30:07.617732   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.617739   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:30:07.617746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:30:07.617801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:30:07.644176   45025 cri.go:89] found id: ""
	I1211 00:30:07.644190   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.644197   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:30:07.644202   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:30:07.644260   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:30:07.673956   45025 cri.go:89] found id: ""
	I1211 00:30:07.673970   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.673977   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:30:07.673982   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:30:07.674040   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:30:07.699591   45025 cri.go:89] found id: ""
	I1211 00:30:07.699605   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.699612   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:30:07.699619   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:30:07.699631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:30:07.710731   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:30:07.710746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:30:07.782904   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:30:07.782915   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:30:07.782925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:30:07.853292   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:30:07.853310   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:30:07.882071   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:30:07.882089   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 00:30:07.951740   45025 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 00:30:07.951780   45025 out.go:285] * 
	W1211 00:30:07.951888   45025 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the "Error starting cluster" block above, elided ...]
	W1211 00:30:07.951950   45025 out.go:285] * 
	W1211 00:30:07.954090   45025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:30:07.959721   45025 out.go:203] 
	W1211 00:30:07.962947   45025 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the block above, elided ...]
	W1211 00:30:07.963287   45025 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 00:30:07.963357   45025 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 00:30:07.966374   45025 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:32:12.439821   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:12.440230   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:12.441697   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:12.442007   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:12.443413   23313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:32:12 up 43 min,  0 user,  load average: 0.76, 0.43, 0.44
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:32:09 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:10 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1124.
	Dec 11 00:32:10 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:10 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:10 functional-786978 kubelet[23157]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:10 functional-786978 kubelet[23157]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:10 functional-786978 kubelet[23157]: E1211 00:32:10.468227   23157 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:10 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:10 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1125.
	Dec 11 00:32:11 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:11 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:11 functional-786978 kubelet[23204]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:11 functional-786978 kubelet[23204]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:11 functional-786978 kubelet[23204]: E1211 00:32:11.217903   23204 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 11 00:32:11 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:11 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:11 functional-786978 kubelet[23227]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:11 functional-786978 kubelet[23227]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:11 functional-786978 kubelet[23227]: E1211 00:32:11.959281   23227 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:11 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
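The kubelet journal above contains the actual root cause: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own configuration validation ("kubelet is configured to not run on a host using cgroup v1") and systemd restarts it in a loop (restart counter at 1124 and climbing), so the apiserver on port 8441 never comes up and every status and kubectl check below fails. A minimal diagnostic sketch, assuming the functional-786978 profile is still running and reachable via minikube ssh:

# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy seen here.
minikube -p functional-786978 ssh -- stat -fc %T /sys/fs/cgroup

# Tail the crash loop and read systemd's restart counter directly.
minikube -p functional-786978 ssh -- sudo journalctl -u kubelet -n 20 --no-pager
minikube -p functional-786978 ssh -- systemctl show kubelet -p NRestarts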
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (333.930677ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.10s)
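Two remediations are named in the log itself. The generic suggestion is the cgroup-driver flag; the preflight warning is more specific: kubelet v1.35+ only keeps running on cgroup v1 if its configuration sets FailCgroupV1 to false. A hedged sketch of both routes follows; how minikube plumbs values into KubeletConfiguration fields (as opposed to kubelet flags) is an assumption here, not something this log confirms:

# Verbatim suggestion from the log; addresses the driver, not the cgroup v1 check.
minikube start -p functional-786978 --extra-config=kubelet.cgroup-driver=systemd

# The [patches] lines in the kubeadm output show a strategic-merge patch is
# already applied to the "kubeletconfiguration" target. A hypothetical patch
# opting back in to cgroup v1, per the preflight warning's wording:
mkdir -p /tmp/kubeadm-patches
printf 'failCgroupV1: false\n' > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
# ...then hand the directory to kubeadm init via its --patches /tmp/kubeadm-patches flag.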

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-786978 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-786978 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (58.030278ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-786978 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-786978 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-786978 describe po hello-node-connect: exit status 1 (62.615896ms)

** stderr ** 
	E1211 00:31:56.537017   59178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.538635   59178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.540055   59178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.541460   59178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.542844   59178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-786978 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-786978 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-786978 logs -l app=hello-node-connect: exit status 1 (61.664934ms)

** stderr ** 
	E1211 00:31:56.600422   59183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.601970   59183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.603460   59183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.604901   59183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-786978 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-786978 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-786978 describe svc hello-node-connect: exit status 1 (64.50086ms)

** stderr ** 
	E1211 00:31:56.663430   59187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.665040   59187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.666509   59187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.667959   59187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.669359   59187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-786978 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
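Every kubectl invocation in this post-mortem fails identically: connection refused on 192.168.49.2:8441. The deployment, describe, and logs failures are therefore all downstream of the apiserver never starting (the kubelet crash loop documented in the StatusCmd section above), not separate service or networking bugs. A one-liner to confirm the single root cause, assuming curl on the host:

curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo 'apiserver unreachable'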
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
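The inspect output shows the node container itself is fine: State.Running is true with RestartCount 0, and 8441/tcp is published to 127.0.0.1:32786 on the host. Nothing inside the container listens on 8441, consistent with the kubelet/apiserver failure rather than a Docker-level problem. A quick probe of the published port (sketch):

docker port functional-786978 8441        # expect 127.0.0.1:32786
curl -sk --max-time 5 https://127.0.0.1:32786/healthz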
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (325.310603ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-786978 cache reload                                                                                                                              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ ssh     │ functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                     │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │ 11 Dec 25 00:17 UTC │
	│ kubectl │ functional-786978 kubectl -- --context functional-786978 get pods                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ start   │ -p functional-786978 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:17 UTC │                     │
	│ cp      │ functional-786978 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ config  │ functional-786978 config unset cpus                                                                                                                         │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ config  │ functional-786978 config get cpus                                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │                     │
	│ config  │ functional-786978 config set cpus 2                                                                                                                         │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ config  │ functional-786978 config get cpus                                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ config  │ functional-786978 config unset cpus                                                                                                                         │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ ssh     │ functional-786978 ssh -n functional-786978 sudo cat /home/docker/cp-test.txt                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ config  │ functional-786978 config get cpus                                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │                     │
	│ ssh     │ functional-786978 ssh echo hello                                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ cp      │ functional-786978 cp functional-786978:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp829540402/001/cp-test.txt │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ ssh     │ functional-786978 ssh cat /etc/hostname                                                                                                                     │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ ssh     │ functional-786978 ssh -n functional-786978 sudo cat /home/docker/cp-test.txt                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ tunnel  │ functional-786978 tunnel --alsologtostderr                                                                                                                  │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │                     │
	│ tunnel  │ functional-786978 tunnel --alsologtostderr                                                                                                                  │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │                     │
	│ cp      │ functional-786978 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                   │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ tunnel  │ functional-786978 tunnel --alsologtostderr                                                                                                                  │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │                     │
	│ ssh     │ functional-786978 ssh -n functional-786978 sudo cat /tmp/does/not/exist/cp-test.txt                                                                         │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:30 UTC │ 11 Dec 25 00:30 UTC │
	│ addons  │ functional-786978 addons list                                                                                                                               │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │ 11 Dec 25 00:31 UTC │
	│ addons  │ functional-786978 addons list -o json                                                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:31 UTC │ 11 Dec 25 00:31 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
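
The config rows above exercise minikube's per-profile settings store; a minimal reproduction of that cycle (profile name taken from this run; the non-zero exit on a missing key is an assumption consistent with the rows above that show no completion time) is:

	minikube -p functional-786978 config set cpus 2    # persists cpus=2 in the profile config
	minikube -p functional-786978 config get cpus      # prints 2
	minikube -p functional-786978 config unset cpus    # removes the key; a subsequent get exits non-zero
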
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:17:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
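
Decoding the first entry below under that format (a reading of the standard glog layout, not extra data from this run): "I1211 00:17:55.340423   45025 out.go:360]" breaks down as severity I (Info; W/E/F would be Warning/Error/Fatal), month 12 day 11, wall time 00:17:55.340423, thread id 45025, and source location out.go line 360.
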
	I1211 00:17:55.340423   45025 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:17:55.340537   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340541   45025 out.go:374] Setting ErrFile to fd 2...
	I1211 00:17:55.340544   45025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:17:55.340791   45025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:17:55.341139   45025 out.go:368] Setting JSON to false
	I1211 00:17:55.342235   45025 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1762,"bootTime":1765410514,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:17:55.342290   45025 start.go:143] virtualization:  
	I1211 00:17:55.345626   45025 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:17:55.349437   45025 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:17:55.349518   45025 notify.go:221] Checking for updates...
	I1211 00:17:55.355612   45025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:17:55.358489   45025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:17:55.361319   45025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:17:55.364268   45025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:17:55.367246   45025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:17:55.370742   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:55.370850   45025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:17:55.397690   45025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:17:55.397801   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.502686   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.493021097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.502775   45025 docker.go:319] overlay module found
	I1211 00:17:55.506026   45025 out.go:179] * Using the docker driver based on existing profile
	I1211 00:17:55.508857   45025 start.go:309] selected driver: docker
	I1211 00:17:55.508866   45025 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.508963   45025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:17:55.509064   45025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:17:55.563622   45025 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-11 00:17:55.55460881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:17:55.564041   45025 start_flags.go:1131] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 00:17:55.564074   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:55.564121   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:55.564168   45025 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:55.567337   45025 out.go:179] * Starting "functional-786978" primary control-plane node in "functional-786978" cluster
	I1211 00:17:55.570124   45025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 00:17:55.572957   45025 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 00:17:55.575721   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:55.575758   45025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 00:17:55.575767   45025 cache.go:65] Caching tarball of preloaded images
	I1211 00:17:55.575808   45025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 00:17:55.575848   45025 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 00:17:55.575857   45025 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 00:17:55.575972   45025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/config.json ...
	I1211 00:17:55.595069   45025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 00:17:55.595078   45025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 00:17:55.595099   45025 cache.go:243] Successfully downloaded all kic artifacts
	I1211 00:17:55.595134   45025 start.go:360] acquireMachinesLock for functional-786978: {Name:mk5d633718b28dc32710e62bf470b68825cbd931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 00:17:55.595195   45025 start.go:364] duration metric: took 45.113µs to acquireMachinesLock for "functional-786978"
	I1211 00:17:55.595213   45025 start.go:96] Skipping create...Using existing machine configuration
	I1211 00:17:55.595217   45025 fix.go:54] fixHost starting: 
	I1211 00:17:55.595484   45025 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
	I1211 00:17:55.612234   45025 fix.go:112] recreateIfNeeded on functional-786978: state=Running err=<nil>
	W1211 00:17:55.612254   45025 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 00:17:55.615553   45025 out.go:252] * Updating the running docker "functional-786978" container ...
	I1211 00:17:55.615576   45025 machine.go:94] provisionDockerMachine start ...
	I1211 00:17:55.615650   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.633023   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.633331   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.633337   45025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 00:17:55.782629   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.782643   45025 ubuntu.go:182] provisioning hostname "functional-786978"
	I1211 00:17:55.782717   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.800268   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.800560   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.800569   45025 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-786978 && echo "functional-786978" | sudo tee /etc/hostname
	I1211 00:17:55.960068   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-786978
	
	I1211 00:17:55.960134   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:55.979369   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:55.979668   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:55.979683   45025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-786978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-786978/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-786978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 00:17:56.131539   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
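
The guarded edit above keeps /etc/hosts idempotent: it only rewrites or appends the 127.0.1.1 entry when no line already maps the hostname. A quick post-hoc check of the mapping (profile name from this run):

	minikube -p functional-786978 ssh -- grep -n '127.0.1.1 functional-786978' /etc/hosts
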
	I1211 00:17:56.131559   45025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 00:17:56.131581   45025 ubuntu.go:190] setting up certificates
	I1211 00:17:56.131589   45025 provision.go:84] configureAuth start
	I1211 00:17:56.131663   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:56.153195   45025 provision.go:143] copyHostCerts
	I1211 00:17:56.153275   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 00:17:56.153283   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 00:17:56.153368   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 00:17:56.153542   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 00:17:56.153546   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 00:17:56.153590   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 00:17:56.153677   45025 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 00:17:56.153682   45025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 00:17:56.153707   45025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 00:17:56.153777   45025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.functional-786978 san=[127.0.0.1 192.168.49.2 functional-786978 localhost minikube]
	I1211 00:17:56.467494   45025 provision.go:177] copyRemoteCerts
	I1211 00:17:56.467553   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 00:17:56.467596   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.484090   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:56.587917   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1211 00:17:56.605865   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 00:17:56.622832   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 00:17:56.639884   45025 provision.go:87] duration metric: took 508.274173ms to configureAuth
	I1211 00:17:56.639901   45025 ubuntu.go:206] setting minikube options for container-runtime
	I1211 00:17:56.640097   45025 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:17:56.640201   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:56.656951   45025 main.go:143] libmachine: Using SSH client type: native
	I1211 00:17:56.657259   45025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1211 00:17:56.657272   45025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 00:17:57.016039   45025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 00:17:57.016056   45025 machine.go:97] duration metric: took 1.400473029s to provisionDockerMachine
	I1211 00:17:57.016068   45025 start.go:293] postStartSetup for "functional-786978" (driver="docker")
	I1211 00:17:57.016080   45025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 00:17:57.016152   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 00:17:57.016210   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.035864   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.138938   45025 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 00:17:57.142378   45025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 00:17:57.142395   45025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 00:17:57.142405   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 00:17:57.142462   45025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 00:17:57.142546   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 00:17:57.142617   45025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts -> hosts in /etc/test/nested/copy/4875
	I1211 00:17:57.142658   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4875
	I1211 00:17:57.149965   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:57.167412   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts --> /etc/test/nested/copy/4875/hosts (40 bytes)
	I1211 00:17:57.184830   45025 start.go:296] duration metric: took 168.748285ms for postStartSetup
	I1211 00:17:57.184913   45025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:17:57.184954   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.203305   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.304245   45025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 00:17:57.309118   45025 fix.go:56] duration metric: took 1.713893936s for fixHost
	I1211 00:17:57.309133   45025 start.go:83] releasing machines lock for "functional-786978", held for 1.713931903s
	I1211 00:17:57.309206   45025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-786978
	I1211 00:17:57.326163   45025 ssh_runner.go:195] Run: cat /version.json
	I1211 00:17:57.326207   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.326441   45025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 00:17:57.326492   45025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
	I1211 00:17:57.346150   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.355283   45025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
	I1211 00:17:57.447048   45025 ssh_runner.go:195] Run: systemctl --version
	I1211 00:17:57.543733   45025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 00:17:57.583708   45025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 00:17:57.588962   45025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 00:17:57.589026   45025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 00:17:57.598123   45025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 00:17:57.598147   45025 start.go:496] detecting cgroup driver to use...
	I1211 00:17:57.598178   45025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 00:17:57.598242   45025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 00:17:57.616553   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 00:17:57.632037   45025 docker.go:218] disabling cri-docker service (if available) ...
	I1211 00:17:57.632116   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 00:17:57.648871   45025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 00:17:57.662555   45025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 00:17:57.780641   45025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 00:17:57.896253   45025 docker.go:234] disabling docker service ...
	I1211 00:17:57.896308   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 00:17:57.910709   45025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 00:17:57.923903   45025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 00:17:58.032234   45025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 00:17:58.154255   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
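
The stop/disable/mask sequence above keeps dockerd and cri-dockerd from reclaiming the CRI socket after a reboot, since this profile pins CRI-O; condensed into equivalent systemctl calls (same units as the log, grouped):

	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
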
	I1211 00:17:58.166925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 00:17:58.180565   45025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 00:17:58.180619   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.189311   45025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 00:17:58.189376   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.198596   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.207202   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.215908   45025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 00:17:58.223742   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.232864   45025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.241359   45025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 00:17:58.249993   45025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 00:17:58.257330   45025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 00:17:58.264525   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.395006   45025 ssh_runner.go:195] Run: sudo systemctl restart crio
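
Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, sketched as the keys they leave behind (the TOML section headers and all other values are assumed from a stock CRI-O drop-in; the edits themselves do not print the file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
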
	I1211 00:17:58.567132   45025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 00:17:58.567191   45025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 00:17:58.572106   45025 start.go:564] Will wait 60s for crictl version
	I1211 00:17:58.572166   45025 ssh_runner.go:195] Run: which crictl
	I1211 00:17:58.576600   45025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 00:17:58.605345   45025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 00:17:58.605434   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.635482   45025 ssh_runner.go:195] Run: crio --version
	I1211 00:17:58.670505   45025 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 00:17:58.673486   45025 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 00:17:58.691254   45025 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
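
Rendered for this cluster, the inspect template above would produce JSON roughly like the following; Name, Gateway and the container IP are grounded in this log, while Driver, Subnet, MTU and the exact address formatting are illustrative assumptions:

	{"Name": "functional-786978", "Driver": "bridge", "Subnet": "192.168.49.0/24", "Gateway": "192.168.49.1", "MTU": 1500, "ContainerIPs": ["192.168.49.2/24"]}
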
	I1211 00:17:58.698413   45025 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1211 00:17:58.701098   45025 kubeadm.go:884] updating cluster {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 00:17:58.701227   45025 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 00:17:58.701291   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.741056   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.741070   45025 crio.go:433] Images already preloaded, skipping extraction
	I1211 00:17:58.741127   45025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 00:17:58.766313   45025 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 00:17:58.766324   45025 cache_images.go:86] Images are preloaded, skipping loading
	I1211 00:17:58.766330   45025 kubeadm.go:935] updating node { 192.168.49.2  8441 v1.35.0-beta.0 crio true true} ...
	I1211 00:17:58.766420   45025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-786978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
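
To confirm the drop-in above is the one systemd actually loads, the effective kubelet unit can be printed on the node; a minimal check:

	minikube -p functional-786978 ssh -- sudo systemctl cat kubelet
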
	I1211 00:17:58.766498   45025 ssh_runner.go:195] Run: crio config
	I1211 00:17:58.831179   45025 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1211 00:17:58.831214   45025 cni.go:84] Creating CNI manager for ""
	I1211 00:17:58.831224   45025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:17:58.831240   45025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 00:17:58.831262   45025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-786978 NodeName:functional-786978 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 00:17:58.831383   45025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-786978"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
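
Before the init phases run, the freshly written manifest can be sanity-checked with the same pinned binary; a minimal check (kubeadm's "config validate" subcommand is assumed to be available in this v1.35.0-beta.0 build):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new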
	
	I1211 00:17:58.831452   45025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 00:17:58.839023   45025 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 00:17:58.839084   45025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 00:17:58.846528   45025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1211 00:17:58.859010   45025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 00:17:58.871952   45025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1211 00:17:58.884395   45025 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1211 00:17:58.888346   45025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 00:17:58.999004   45025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 00:17:59.014620   45025 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978 for IP: 192.168.49.2
	I1211 00:17:59.014632   45025 certs.go:195] generating shared ca certs ...
	I1211 00:17:59.014647   45025 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 00:17:59.014834   45025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 00:17:59.014887   45025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 00:17:59.014894   45025 certs.go:257] generating profile certs ...
	I1211 00:17:59.015111   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.key
	I1211 00:17:59.015168   45025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key.47ae6169
	I1211 00:17:59.015206   45025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key
	I1211 00:17:59.015330   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 00:17:59.015361   45025 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 00:17:59.015369   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 00:17:59.015399   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 00:17:59.015424   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 00:17:59.015449   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 00:17:59.015495   45025 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 00:17:59.016236   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 00:17:59.036319   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 00:17:59.054207   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 00:17:59.085140   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 00:17:59.102589   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 00:17:59.119619   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 00:17:59.137775   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 00:17:59.155046   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 00:17:59.173200   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 00:17:59.191371   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 00:17:59.208847   45025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 00:17:59.225559   45025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 00:17:59.238258   45025 ssh_runner.go:195] Run: openssl version
	I1211 00:17:59.244279   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.251482   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 00:17:59.258806   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262560   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.262615   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 00:17:59.303500   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 00:17:59.310986   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.318422   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 00:17:59.325839   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329190   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.329239   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 00:17:59.369865   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 00:17:59.377731   45025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.385365   45025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 00:17:59.392850   45025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396464   45025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.396534   45025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 00:17:59.437551   45025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 00:17:59.445097   45025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 00:17:59.449099   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 00:17:59.490493   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 00:17:59.531562   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 00:17:59.572726   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 00:17:59.613479   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 00:17:59.656606   45025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
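
Each openssl call above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a non-zero exit would force regeneration; the same sweep written as a loop over the paths seen in this log:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: valid for at least 24h" || echo "$c: expiring, would be regenerated"
	done
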
	I1211 00:17:59.697483   45025 kubeadm.go:401] StartCluster: {Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:17:59.697558   45025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 00:17:59.697631   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.726147   45025 cri.go:89] found id: ""
	I1211 00:17:59.726208   45025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 00:17:59.734119   45025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 00:17:59.734129   45025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 00:17:59.734181   45025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 00:17:59.741669   45025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.742193   45025 kubeconfig.go:125] found "functional-786978" server: "https://192.168.49.2:8441"
	I1211 00:17:59.743487   45025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 00:17:59.751799   45025 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-11 00:03:23.654512319 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-11 00:17:58.880060835 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
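
The restart path decides whether to reconfigure purely from this diff: any difference between the stored kubeadm.yaml and the freshly rendered kubeadm.yaml.new counts as drift. Reduced to its essence (diff exits non-zero when the files differ):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no drift" || echo "drift detected: cluster will be reconfigured"
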
	I1211 00:17:59.751819   45025 kubeadm.go:1161] stopping kube-system containers ...
	I1211 00:17:59.751836   45025 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1211 00:17:59.751895   45025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 00:17:59.779633   45025 cri.go:89] found id: ""
	I1211 00:17:59.779698   45025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 00:17:59.796551   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:17:59.805010   45025 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 11 00:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 11 00:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 11 00:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 11 00:07 /etc/kubernetes/scheduler.conf
	
	I1211 00:17:59.805070   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:17:59.813093   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:17:59.820917   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.820973   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:17:59.828623   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.836494   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.836548   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:17:59.843945   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:17:59.851499   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 00:17:59.851553   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:17:59.859289   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:17:59.867193   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:17:59.916974   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.185880   45025 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.268883094s)
	I1211 00:18:02.185949   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.399533   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.467551   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1211 00:18:02.514148   45025 api_server.go:52] waiting for apiserver process to appear ...
	I1211 00:18:02.514234   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... `sudo pgrep -xnf kube-apiserver.*minikube.*` re-run every ~500ms from 00:18:03.014347 through 00:19:02.015181 (119 identical probes, process 45025); no kube-apiserver process appeared ...]
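The probes collapsed above are a plain poll: re-run pgrep on a fixed interval until the apiserver process shows up or the caller gives up. A self-contained sketch of that pattern, using the exact pgrep arguments from the log (hypothetical helper, not minikube's api_server.go):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess re-runs pgrep until it exits 0 (a matching process
    // exists) or the timeout elapses, sleeping 500ms between attempts --
    // the same cadence visible in the probes above.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep -x: exact match, -n: newest, -f: match full command line.
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err) // in this run the apiserver never appeared
        }
    }

In this run the loop never succeeds: after a minute of probes, control falls through to the container and log inspection that follows.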
	I1211 00:19:02.514444   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:02.514543   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:02.540506   45025 cri.go:89] found id: ""
	I1211 00:19:02.540520   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.540528   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:02.540533   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:02.540593   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:02.567414   45025 cri.go:89] found id: ""
	I1211 00:19:02.567427   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.567434   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:02.567439   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:02.567500   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:02.598249   45025 cri.go:89] found id: ""
	I1211 00:19:02.598263   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.598270   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:02.598277   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:02.598348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:02.624793   45025 cri.go:89] found id: ""
	I1211 00:19:02.624807   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.624822   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:02.624828   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:02.624894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:02.654153   45025 cri.go:89] found id: ""
	I1211 00:19:02.654170   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.654177   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:02.654182   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:02.654251   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:02.682217   45025 cri.go:89] found id: ""
	I1211 00:19:02.682231   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.682239   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:02.682244   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:02.682304   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:02.708660   45025 cri.go:89] found id: ""
	I1211 00:19:02.708674   45025 logs.go:282] 0 containers: []
	W1211 00:19:02.708682   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:02.708690   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:02.708700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:02.775902   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:02.775921   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:02.787446   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:02.787463   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:02.857001   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:02.848304   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.849154   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.850464   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.851108   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:02.852963   11018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:02.857011   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:02.857022   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:02.927792   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:02.927812   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
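Every `kubectl describe nodes` attempt in this phase fails the same way: the client cannot even open a TCP connection to the apiserver, consistent with no kube-apiserver process or container existing. A tiny probe that reproduces exactly that symptom (host and port assumed from the repeated "dial tcp [::1]:8441: connect: connection refused" errors above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8441 is the endpoint kubectl is failing to reach above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }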
	I1211 00:19:05.458523   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:05.468377   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:05.468436   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:05.492943   45025 cri.go:89] found id: ""
	I1211 00:19:05.492957   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.492963   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:05.492968   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:05.493030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:05.520504   45025 cri.go:89] found id: ""
	I1211 00:19:05.520517   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.520525   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:05.520530   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:05.520592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:05.551505   45025 cri.go:89] found id: ""
	I1211 00:19:05.551518   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.551525   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:05.551531   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:05.551586   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:05.580658   45025 cri.go:89] found id: ""
	I1211 00:19:05.580672   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.580679   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:05.580683   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:05.580757   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:05.607012   45025 cri.go:89] found id: ""
	I1211 00:19:05.607026   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.607033   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:05.607038   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:05.607102   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:05.632061   45025 cri.go:89] found id: ""
	I1211 00:19:05.632075   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.632082   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:05.632087   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:05.632152   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:05.658481   45025 cri.go:89] found id: ""
	I1211 00:19:05.658494   45025 logs.go:282] 0 containers: []
	W1211 00:19:05.658514   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:05.658522   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:05.658533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:05.724859   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:05.724876   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:05.735886   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:05.735901   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:05.798612   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:05.790382   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.791228   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.792958   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.793256   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:05.794777   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:05.798622   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:05.798634   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:05.867342   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:05.867360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:08.400995   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:08.413387   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:08.413449   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:08.448131   45025 cri.go:89] found id: ""
	I1211 00:19:08.448144   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.448151   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:08.448157   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:08.448216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:08.477588   45025 cri.go:89] found id: ""
	I1211 00:19:08.477601   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.477608   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:08.477612   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:08.477671   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:08.502742   45025 cri.go:89] found id: ""
	I1211 00:19:08.502755   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.502763   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:08.502768   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:08.502826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:08.528585   45025 cri.go:89] found id: ""
	I1211 00:19:08.528598   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.528606   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:08.528611   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:08.528674   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:08.559543   45025 cri.go:89] found id: ""
	I1211 00:19:08.559557   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.559564   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:08.559569   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:08.559630   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:08.585362   45025 cri.go:89] found id: ""
	I1211 00:19:08.585377   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.585384   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:08.585390   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:08.585462   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:08.611828   45025 cri.go:89] found id: ""
	I1211 00:19:08.611842   45025 logs.go:282] 0 containers: []
	W1211 00:19:08.611849   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:08.611856   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:08.611866   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:08.678470   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:08.678488   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:08.691361   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:08.691376   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:08.762621   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:08.753372   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.754349   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756016   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.756570   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:08.758134   11230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:08.762636   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:08.762649   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:08.832475   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:08.832493   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:11.361776   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:11.371640   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:11.371694   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:11.398476   45025 cri.go:89] found id: ""
	I1211 00:19:11.398489   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.398496   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:11.398501   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:11.398559   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:11.429955   45025 cri.go:89] found id: ""
	I1211 00:19:11.429969   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.429976   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:11.429982   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:11.430037   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:11.457296   45025 cri.go:89] found id: ""
	I1211 00:19:11.457309   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.457316   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:11.457324   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:11.457382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:11.482941   45025 cri.go:89] found id: ""
	I1211 00:19:11.482954   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.482962   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:11.483012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:11.483069   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:11.508408   45025 cri.go:89] found id: ""
	I1211 00:19:11.508431   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.508438   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:11.508443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:11.508510   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:11.533840   45025 cri.go:89] found id: ""
	I1211 00:19:11.533854   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.533869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:11.533875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:11.533950   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:11.559317   45025 cri.go:89] found id: ""
	I1211 00:19:11.559331   45025 logs.go:282] 0 containers: []
	W1211 00:19:11.559338   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:11.559345   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:11.559354   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:11.626027   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:11.626045   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:11.637884   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:11.637900   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:11.704689   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:11.695830   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.696271   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698133   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.698587   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:11.700260   11338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:11.704700   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:11.704711   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:11.774803   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:11.774821   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.306913   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:14.318077   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:14.318146   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:14.343407   45025 cri.go:89] found id: ""
	I1211 00:19:14.343421   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.343428   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:14.343433   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:14.343497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:14.370322   45025 cri.go:89] found id: ""
	I1211 00:19:14.370336   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.370342   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:14.370348   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:14.370406   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:14.397449   45025 cri.go:89] found id: ""
	I1211 00:19:14.397462   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.397469   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:14.397474   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:14.397531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:14.430459   45025 cri.go:89] found id: ""
	I1211 00:19:14.430472   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.430479   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:14.430501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:14.430595   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:14.461756   45025 cri.go:89] found id: ""
	I1211 00:19:14.461769   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.461776   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:14.461781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:14.461849   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:14.488174   45025 cri.go:89] found id: ""
	I1211 00:19:14.488189   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.488196   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:14.488201   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:14.488258   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:14.517330   45025 cri.go:89] found id: ""
	I1211 00:19:14.517343   45025 logs.go:282] 0 containers: []
	W1211 00:19:14.517350   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:14.517357   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:14.517368   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:14.549197   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:14.549215   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:14.618908   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:14.618926   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:14.630263   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:14.630279   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:14.698427   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:14.689915   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.690647   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692147   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.692709   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:14.694360   11456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:14.698437   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:14.698453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.273043   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:17.283257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:17.283323   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:17.308437   45025 cri.go:89] found id: ""
	I1211 00:19:17.308450   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.308457   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:17.308462   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:17.308522   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:17.337454   45025 cri.go:89] found id: ""
	I1211 00:19:17.337467   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.337474   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:17.337479   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:17.337538   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:17.363695   45025 cri.go:89] found id: ""
	I1211 00:19:17.363709   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.363717   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:17.363722   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:17.363781   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:17.388300   45025 cri.go:89] found id: ""
	I1211 00:19:17.388314   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.388321   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:17.388327   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:17.388383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:17.418934   45025 cri.go:89] found id: ""
	I1211 00:19:17.418947   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.418954   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:17.418959   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:17.419036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:17.453193   45025 cri.go:89] found id: ""
	I1211 00:19:17.453207   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.453214   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:17.453220   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:17.453308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:17.487806   45025 cri.go:89] found id: ""
	I1211 00:19:17.487820   45025 logs.go:282] 0 containers: []
	W1211 00:19:17.487827   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:17.487834   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:17.487845   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:17.553739   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:17.553758   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:17.564920   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:17.564936   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:17.630666   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:17.622390   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.622943   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.624723   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.625205   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:17.626694   11551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:19:17.630680   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:17.630705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:17.701596   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:17.701614   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:20.234880   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:20.244988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:20.245050   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:20.273088   45025 cri.go:89] found id: ""
	I1211 00:19:20.273101   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.273109   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:20.273114   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:20.273175   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:20.302062   45025 cri.go:89] found id: ""
	I1211 00:19:20.302076   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.302083   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:20.302089   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:20.302157   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:20.326827   45025 cri.go:89] found id: ""
	I1211 00:19:20.326841   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.326859   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:20.326865   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:20.326922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:20.356288   45025 cri.go:89] found id: ""
	I1211 00:19:20.356302   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.356309   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:20.356315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:20.356375   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:20.382358   45025 cri.go:89] found id: ""
	I1211 00:19:20.382373   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.382380   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:20.382386   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:20.382445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:20.417393   45025 cri.go:89] found id: ""
	I1211 00:19:20.417407   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.417424   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:20.417430   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:20.417488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:20.447521   45025 cri.go:89] found id: ""
	I1211 00:19:20.447534   45025 logs.go:282] 0 containers: []
	W1211 00:19:20.447541   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:20.447550   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:20.447560   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:20.518467   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:20.518484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:20.530666   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:20.530681   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:20.599280   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:20.590300   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.590949   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.592716   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.593240   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:20.594841   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:20.599290   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:20.599301   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:20.666760   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:20.666778   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.200454   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:23.210413   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:23.210471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:23.234734   45025 cri.go:89] found id: ""
	I1211 00:19:23.234748   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.234756   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:23.234761   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:23.234822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:23.260526   45025 cri.go:89] found id: ""
	I1211 00:19:23.260540   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.260547   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:23.260552   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:23.260611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:23.284278   45025 cri.go:89] found id: ""
	I1211 00:19:23.284291   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.284298   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:23.284303   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:23.284360   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:23.309416   45025 cri.go:89] found id: ""
	I1211 00:19:23.309431   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.309438   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:23.309443   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:23.309502   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:23.335667   45025 cri.go:89] found id: ""
	I1211 00:19:23.335682   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.335689   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:23.335695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:23.335751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:23.364847   45025 cri.go:89] found id: ""
	I1211 00:19:23.364862   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.364869   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:23.364875   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:23.364941   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:23.389436   45025 cri.go:89] found id: ""
	I1211 00:19:23.389449   45025 logs.go:282] 0 containers: []
	W1211 00:19:23.389457   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:23.389464   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:23.389477   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:23.402133   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:23.402149   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:23.484989   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:23.476467   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.477076   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.478767   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.479376   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:23.481018   11759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:23.484999   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:23.485010   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:23.553567   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:23.553586   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:23.583342   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:23.583359   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
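Between apiserver probes, the gatherer enumerates each expected control-plane container by name through crictl and logs a warning whenever the list comes back empty, which is what every cycle here shows. The following Go sketch reproduces that per-component lookup under the assumption that crictl is on PATH and sudo is available; it mirrors the commands logged above but is not minikube's cri.go implementation:

    // Illustrative sketch: shell out to crictl for each control-plane
    // component name seen in the log and report empty results, the way
    // logs.go:284 does above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

In this run every component returns an empty ID list, which is consistent with the apiserver port being closed: no control-plane containers were ever started or they have all exited.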
	I1211 00:19:26.151360   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:26.161613   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:26.161676   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:26.187432   45025 cri.go:89] found id: ""
	I1211 00:19:26.187446   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.187453   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:26.187459   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:26.187514   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:26.212567   45025 cri.go:89] found id: ""
	I1211 00:19:26.212581   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.212588   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:26.212593   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:26.212650   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:26.238347   45025 cri.go:89] found id: ""
	I1211 00:19:26.238359   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.238367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:26.238372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:26.238426   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:26.264493   45025 cri.go:89] found id: ""
	I1211 00:19:26.264506   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.264513   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:26.264518   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:26.264578   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:26.289421   45025 cri.go:89] found id: ""
	I1211 00:19:26.289435   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.289442   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:26.289446   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:26.289512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:26.317737   45025 cri.go:89] found id: ""
	I1211 00:19:26.317751   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.317758   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:26.317776   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:26.317832   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:26.342012   45025 cri.go:89] found id: ""
	I1211 00:19:26.342025   45025 logs.go:282] 0 containers: []
	W1211 00:19:26.342032   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:26.342039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:26.342049   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:26.409907   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:26.409925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:26.444709   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:26.444725   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:26.520673   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:26.520692   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:26.533201   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:26.533217   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:26.595360   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:26.586578   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.587614   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.588718   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.589353   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:26.591032   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:29.096255   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:29.106290   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:29.106348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:29.135863   45025 cri.go:89] found id: ""
	I1211 00:19:29.135876   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.135883   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:29.135888   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:29.135948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:29.162996   45025 cri.go:89] found id: ""
	I1211 00:19:29.163011   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.163018   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:29.163024   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:29.163104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:29.189722   45025 cri.go:89] found id: ""
	I1211 00:19:29.189738   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.189745   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:29.189749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:29.189834   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:29.215022   45025 cri.go:89] found id: ""
	I1211 00:19:29.215036   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.215042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:29.215047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:29.215106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:29.240657   45025 cri.go:89] found id: ""
	I1211 00:19:29.240671   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.240679   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:29.240684   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:29.240744   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:29.265406   45025 cri.go:89] found id: ""
	I1211 00:19:29.265420   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.265427   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:29.265432   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:29.265488   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:29.289115   45025 cri.go:89] found id: ""
	I1211 00:19:29.289128   45025 logs.go:282] 0 containers: []
	W1211 00:19:29.289136   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:29.289143   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:29.289154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:29.316627   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:29.316646   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:29.381873   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:29.381892   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:29.392836   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:29.392852   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:29.474052   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:29.464931   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.465626   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.466727   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.467263   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:29.469434   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:29.474062   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:29.474072   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.041538   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:32.052288   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:32.052353   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:32.078058   45025 cri.go:89] found id: ""
	I1211 00:19:32.078071   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.078078   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:32.078084   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:32.078143   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:32.104226   45025 cri.go:89] found id: ""
	I1211 00:19:32.104240   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.104251   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:32.104256   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:32.104315   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:32.130104   45025 cri.go:89] found id: ""
	I1211 00:19:32.130123   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.130130   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:32.130135   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:32.130196   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:32.156116   45025 cri.go:89] found id: ""
	I1211 00:19:32.156131   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.156138   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:32.156143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:32.156204   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:32.182027   45025 cri.go:89] found id: ""
	I1211 00:19:32.182039   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.182046   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:32.182051   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:32.182119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:32.206462   45025 cri.go:89] found id: ""
	I1211 00:19:32.206476   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.206483   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:32.206488   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:32.206553   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:32.230714   45025 cri.go:89] found id: ""
	I1211 00:19:32.230727   45025 logs.go:282] 0 containers: []
	W1211 00:19:32.230734   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:32.230757   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:32.230773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:32.295411   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:32.295430   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:32.306690   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:32.306705   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:32.373425   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:32.365328   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.366093   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367664   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.367991   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:32.369498   12072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:32.373435   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:32.373446   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:32.441247   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:32.441264   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
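The timestamps (00:19:17, :20, :23, and so on at roughly three-second spacing) show a fixed-interval wait loop: probe for kube-apiserver, and on failure gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before trying again. A hedged sketch of such a loop follows; the interval and deadline are chosen for illustration and are not values taken from minikube:

    // Sketch of the fixed-interval wait loop implied by the timestamps
    // above: retry a TCP probe until a deadline passes.
    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"time"
    )

    func waitForAPIServer(addr string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, interval)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		// In the real log, the per-attempt log gathering happens here.
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for " + addr)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8441", 3*time.Second, time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
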
	I1211 00:19:34.988442   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:34.998718   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:34.998785   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:35.036207   45025 cri.go:89] found id: ""
	I1211 00:19:35.036221   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.036231   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:35.036236   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:35.036298   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:35.062611   45025 cri.go:89] found id: ""
	I1211 00:19:35.062624   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.062631   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:35.062636   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:35.062692   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:35.089089   45025 cri.go:89] found id: ""
	I1211 00:19:35.089102   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.089109   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:35.089115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:35.089177   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:35.116537   45025 cri.go:89] found id: ""
	I1211 00:19:35.116550   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.116558   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:35.116564   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:35.116625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:35.141369   45025 cri.go:89] found id: ""
	I1211 00:19:35.141383   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.141390   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:35.141396   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:35.141464   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:35.167717   45025 cri.go:89] found id: ""
	I1211 00:19:35.167731   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.167738   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:35.167746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:35.167805   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:35.193275   45025 cri.go:89] found id: ""
	I1211 00:19:35.193288   45025 logs.go:282] 0 containers: []
	W1211 00:19:35.193295   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:35.193303   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:35.193313   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:35.223396   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:35.223412   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:35.291423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:35.291442   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:35.302744   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:35.302760   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:35.366712   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:35.358212   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.359116   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.360553   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.361241   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:35.362920   12193 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:35.366722   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:35.366732   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:37.940570   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:37.951183   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:37.951244   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:37.977384   45025 cri.go:89] found id: ""
	I1211 00:19:37.977412   45025 logs.go:282] 0 containers: []
	W1211 00:19:37.977419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:37.977425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:37.977489   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:38.002327   45025 cri.go:89] found id: ""
	I1211 00:19:38.002341   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.002349   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:38.002354   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:38.002433   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:38.032932   45025 cri.go:89] found id: ""
	I1211 00:19:38.032947   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.032955   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:38.032960   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:38.033023   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:38.060494   45025 cri.go:89] found id: ""
	I1211 00:19:38.060508   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.060516   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:38.060522   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:38.060584   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:38.090424   45025 cri.go:89] found id: ""
	I1211 00:19:38.090438   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.090445   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:38.090450   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:38.090511   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:38.117237   45025 cri.go:89] found id: ""
	I1211 00:19:38.117250   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.117258   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:38.117268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:38.117330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:38.144173   45025 cri.go:89] found id: ""
	I1211 00:19:38.144187   45025 logs.go:282] 0 containers: []
	W1211 00:19:38.144195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:38.144203   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:38.144213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:38.213450   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:38.213474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:38.224711   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:38.224727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:38.292623   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:38.283472   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.284379   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286045   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.286776   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:38.288562   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:38.292634   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:38.292644   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:38.360121   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:38.360139   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:40.897394   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:40.907308   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:40.907368   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:40.935845   45025 cri.go:89] found id: ""
	I1211 00:19:40.935861   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.935868   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:40.935874   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:40.935936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:40.961885   45025 cri.go:89] found id: ""
	I1211 00:19:40.961899   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.961906   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:40.961911   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:40.961972   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:40.992115   45025 cri.go:89] found id: ""
	I1211 00:19:40.992129   45025 logs.go:282] 0 containers: []
	W1211 00:19:40.992136   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:40.992141   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:40.992199   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:41.017243   45025 cri.go:89] found id: ""
	I1211 00:19:41.017259   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.017269   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:41.017274   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:41.017355   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:41.046002   45025 cri.go:89] found id: ""
	I1211 00:19:41.046016   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.046022   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:41.046027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:41.046097   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:41.072198   45025 cri.go:89] found id: ""
	I1211 00:19:41.072212   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.072220   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:41.072225   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:41.072297   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:41.097305   45025 cri.go:89] found id: ""
	I1211 00:19:41.097319   45025 logs.go:282] 0 containers: []
	W1211 00:19:41.097326   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:41.097352   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:41.097363   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:41.163075   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:41.163095   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:41.174199   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:41.174214   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:41.239512   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:41.230721   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.231373   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233326   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.233923   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:41.235478   12389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:41.239535   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:41.239556   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:41.311901   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:41.311918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
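The "container status" step logged above is itself a fallback chain: it runs crictl (resolving it with "which", or using the bare name if that fails) and, if the whole pipeline errors out, falls back to "docker ps -a". Below is a small Go sketch of the same try-in-order pattern; the runFirst helper is my own naming for illustration, not a minikube API:

    // Sketch: run candidate commands in order and return the first
    // successful output, mirroring the crictl-then-docker fallback.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runFirst(cmds [][]string) ([]byte, error) {
    	var lastErr error
    	for _, args := range cmds {
    		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    		if err == nil {
    			return out, nil
    		}
    		lastErr = err
    	}
    	return nil, lastErr
    }

    func main() {
    	out, err := runFirst([][]string{
    		{"sudo", "crictl", "ps", "-a"},
    		{"sudo", "docker", "ps", "-a"},
    	})
    	if err != nil {
    		fmt.Println("no container runtime responded:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
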
	I1211 00:19:43.842688   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:43.853001   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:43.853061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:43.877321   45025 cri.go:89] found id: ""
	I1211 00:19:43.877335   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.877342   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:43.877347   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:43.877403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:43.905861   45025 cri.go:89] found id: ""
	I1211 00:19:43.905874   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.905882   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:43.905887   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:43.905948   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:43.931275   45025 cri.go:89] found id: ""
	I1211 00:19:43.931289   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.931309   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:43.931315   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:43.931383   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:43.957472   45025 cri.go:89] found id: ""
	I1211 00:19:43.957485   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.957492   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:43.957497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:43.957556   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:43.987995   45025 cri.go:89] found id: ""
	I1211 00:19:43.988009   45025 logs.go:282] 0 containers: []
	W1211 00:19:43.988016   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:43.988022   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:43.988082   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:44.015918   45025 cri.go:89] found id: ""
	I1211 00:19:44.015934   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.015942   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:44.015948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:44.016028   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:44.044784   45025 cri.go:89] found id: ""
	I1211 00:19:44.044797   45025 logs.go:282] 0 containers: []
	W1211 00:19:44.044804   45025 logs.go:284] No container was found matching "kindnet"
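The block above is minikube's per-component container probe: for each expected control-plane component it lists all matching containers, running or exited, and every probe comes back empty. A sketch of the same sweep as a standalone loop, using only the crictl invocation already shown in the log (the echo messages are illustrative):

    # Probe each expected control-plane component, mirroring minikube's cri.go sweep.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done

An empty result for kube-apiserver in particular is consistent with the connection refused errors: nothing is serving on 8441 because the apiserver container was never created or has been removed.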
	I1211 00:19:44.044812   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:44.044825   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:44.111423   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:44.111440   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:44.122746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:44.122766   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:44.196525   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:44.187383   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.188265   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.189997   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.190570   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:44.192263   12492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
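The "describe nodes" step is minikube shelling into the node and running its bundled kubectl against the node-local kubeconfig; with no apiserver up it exits with status 1 on each attempt. To reproduce it by hand inside the node, the exact command from the log is:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"   # 1 for as long as the apiserver is unreachable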
	I1211 00:19:44.196536   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:44.196547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:44.264322   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:44.264340   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
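Each iteration ends by collecting the same diagnostic bundle. The commands are reproduced verbatim below as one script, should the bundle need to be captured manually (the backtick substitution is kept exactly as the log runs it; $(...) would be equivalent):

    # Per-iteration log bundle, exactly as minikube runs it:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a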
	I1211 00:19:46.797073   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:46.807248   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:46.807312   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:46.833629   45025 cri.go:89] found id: ""
	I1211 00:19:46.833643   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.833650   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:46.833656   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:46.833722   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:46.860316   45025 cri.go:89] found id: ""
	I1211 00:19:46.860329   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.860337   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:46.860342   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:46.860403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:46.886240   45025 cri.go:89] found id: ""
	I1211 00:19:46.886253   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.886261   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:46.886265   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:46.886324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:46.911538   45025 cri.go:89] found id: ""
	I1211 00:19:46.911552   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.911559   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:46.911565   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:46.911625   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:46.938014   45025 cri.go:89] found id: ""
	I1211 00:19:46.938029   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.938036   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:46.938041   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:46.938105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:46.965253   45025 cri.go:89] found id: ""
	I1211 00:19:46.965267   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.965274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:46.965279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:46.965339   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:46.991686   45025 cri.go:89] found id: ""
	I1211 00:19:46.991699   45025 logs.go:282] 0 containers: []
	W1211 00:19:46.991706   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:46.991714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:46.991727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:47.057610   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:47.057627   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:47.069235   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:47.069251   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:47.137186   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:47.128465   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130169   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.130718   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132215   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:47.132674   12599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:47.137197   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:47.137220   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:47.206375   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:47.206397   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:49.735135   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:49.745127   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:49.745191   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:49.770237   45025 cri.go:89] found id: ""
	I1211 00:19:49.770250   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.770257   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:49.770262   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:49.770319   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:49.795789   45025 cri.go:89] found id: ""
	I1211 00:19:49.795803   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.795810   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:49.795815   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:49.795872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:49.825306   45025 cri.go:89] found id: ""
	I1211 00:19:49.825319   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.825326   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:49.825331   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:49.825388   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:49.855190   45025 cri.go:89] found id: ""
	I1211 00:19:49.855204   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.855211   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:49.855216   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:49.855281   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:49.881199   45025 cri.go:89] found id: ""
	I1211 00:19:49.881212   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.881219   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:49.881224   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:49.881280   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:49.906616   45025 cri.go:89] found id: ""
	I1211 00:19:49.906629   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.906636   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:49.906641   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:49.906698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:49.933814   45025 cri.go:89] found id: ""
	I1211 00:19:49.933828   45025 logs.go:282] 0 containers: []
	W1211 00:19:49.933835   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:49.933842   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:49.933859   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:49.944994   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:49.945009   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:50.007164   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:49.998757   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:49.999612   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001336   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.001659   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:50.003262   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:50.007174   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:50.007184   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:50.077454   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:50.077472   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:50.110740   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:50.110757   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.683928   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:52.694104   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:52.694167   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:52.725399   45025 cri.go:89] found id: ""
	I1211 00:19:52.725413   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.725420   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:52.725425   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:52.725483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:52.751850   45025 cri.go:89] found id: ""
	I1211 00:19:52.751863   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.751870   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:52.751875   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:52.751937   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:52.780571   45025 cri.go:89] found id: ""
	I1211 00:19:52.780584   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.780591   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:52.780595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:52.780653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:52.809728   45025 cri.go:89] found id: ""
	I1211 00:19:52.809741   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.809748   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:52.809753   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:52.809808   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:52.834891   45025 cri.go:89] found id: ""
	I1211 00:19:52.834904   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.834910   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:52.834915   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:52.835007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:52.861606   45025 cri.go:89] found id: ""
	I1211 00:19:52.861619   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.861626   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:52.861631   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:52.861688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:52.888101   45025 cri.go:89] found id: ""
	I1211 00:19:52.888115   45025 logs.go:282] 0 containers: []
	W1211 00:19:52.888122   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:52.888130   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:52.888140   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:52.953090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:52.953108   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:52.964419   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:52.964435   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:53.034074   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:53.024818   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.025769   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027575   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.027878   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:53.029244   12803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:53.034091   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:53.034102   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:53.105399   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:53.105417   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.638422   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:55.648339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:55.648396   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:55.678848   45025 cri.go:89] found id: ""
	I1211 00:19:55.678868   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.678876   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:55.678884   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:55.678953   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:55.718935   45025 cri.go:89] found id: ""
	I1211 00:19:55.718959   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.718987   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:55.718992   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:55.719061   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:55.743738   45025 cri.go:89] found id: ""
	I1211 00:19:55.743751   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.743758   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:55.743763   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:55.743822   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:55.769117   45025 cri.go:89] found id: ""
	I1211 00:19:55.769130   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.769137   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:55.769143   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:55.769207   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:55.795500   45025 cri.go:89] found id: ""
	I1211 00:19:55.795529   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.795537   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:55.795542   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:55.795611   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:55.824959   45025 cri.go:89] found id: ""
	I1211 00:19:55.824972   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.824979   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:55.824984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:55.825042   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:55.850737   45025 cri.go:89] found id: ""
	I1211 00:19:55.850750   45025 logs.go:282] 0 containers: []
	W1211 00:19:55.850768   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:55.850776   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:55.850787   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:55.878584   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:55.878600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:55.943684   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:55.943701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:55.954898   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:55.954914   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:56.024872   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:56.012530   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.013266   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.017527   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.018053   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:56.019899   12920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:19:56.024883   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:56.024893   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.594636   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:19:58.605403   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:19:58.605467   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:19:58.637164   45025 cri.go:89] found id: ""
	I1211 00:19:58.637178   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.637189   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:19:58.637194   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:19:58.637252   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:19:58.682644   45025 cri.go:89] found id: ""
	I1211 00:19:58.682657   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.682664   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:19:58.682672   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:19:58.682728   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:19:58.714474   45025 cri.go:89] found id: ""
	I1211 00:19:58.714488   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.714495   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:19:58.714500   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:19:58.714558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:19:58.745457   45025 cri.go:89] found id: ""
	I1211 00:19:58.745470   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.745484   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:19:58.745489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:19:58.745545   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:19:58.771678   45025 cri.go:89] found id: ""
	I1211 00:19:58.771691   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.771704   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:19:58.771710   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:19:58.771770   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:19:58.796493   45025 cri.go:89] found id: ""
	I1211 00:19:58.796507   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.796514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:19:58.796519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:19:58.796576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:19:58.821870   45025 cri.go:89] found id: ""
	I1211 00:19:58.821884   45025 logs.go:282] 0 containers: []
	W1211 00:19:58.821892   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:19:58.821899   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:19:58.821909   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:19:58.894510   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:19:58.894537   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:19:58.927576   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:19:58.927595   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:19:58.994438   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:19:58.994455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:19:59.005360   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:19:59.005377   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:19:59.073100   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:19:59.064341   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.065009   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.066577   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.067304   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:19:59.068922   13025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:01.573622   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:01.584703   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:01.584773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:01.612873   45025 cri.go:89] found id: ""
	I1211 00:20:01.612888   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.612895   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:01.612901   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:01.612964   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:01.641246   45025 cri.go:89] found id: ""
	I1211 00:20:01.641259   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.641267   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:01.641272   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:01.641330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:01.670560   45025 cri.go:89] found id: ""
	I1211 00:20:01.670574   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.670582   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:01.670587   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:01.670652   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:01.697783   45025 cri.go:89] found id: ""
	I1211 00:20:01.697797   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.697804   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:01.697809   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:01.697870   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:01.724991   45025 cri.go:89] found id: ""
	I1211 00:20:01.725005   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.725013   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:01.725019   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:01.725078   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:01.751948   45025 cri.go:89] found id: ""
	I1211 00:20:01.751961   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.751969   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:01.751976   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:01.752036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:01.782191   45025 cri.go:89] found id: ""
	I1211 00:20:01.782204   45025 logs.go:282] 0 containers: []
	W1211 00:20:01.782211   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:01.782218   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:01.782228   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:01.849183   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:01.849203   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:01.863105   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:01.863127   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:01.948480   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:01.938141   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.939058   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.940693   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.941067   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:01.943687   13110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:01.948490   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:01.948501   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:02.031526   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:02.031546   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.563706   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:04.573944   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:04.573999   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:04.604222   45025 cri.go:89] found id: ""
	I1211 00:20:04.604235   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.604242   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:04.604247   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:04.604308   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:04.633340   45025 cri.go:89] found id: ""
	I1211 00:20:04.633353   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.633361   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:04.633365   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:04.633427   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:04.663258   45025 cri.go:89] found id: ""
	I1211 00:20:04.663289   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.663297   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:04.663302   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:04.663373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:04.690031   45025 cri.go:89] found id: ""
	I1211 00:20:04.690044   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.690051   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:04.690056   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:04.690112   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:04.716219   45025 cri.go:89] found id: ""
	I1211 00:20:04.716232   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.716240   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:04.716256   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:04.716317   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:04.742460   45025 cri.go:89] found id: ""
	I1211 00:20:04.742474   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.742481   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:04.742497   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:04.742564   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:04.774107   45025 cri.go:89] found id: ""
	I1211 00:20:04.774121   45025 logs.go:282] 0 containers: []
	W1211 00:20:04.774128   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:04.774136   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:04.774146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:04.806436   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:04.806453   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:04.872547   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:04.872566   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:04.884075   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:04.884092   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:04.982628   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:04.974417   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.974848   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.976500   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.977005   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:04.978639   13228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:04.982638   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:04.982650   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.551877   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:07.561860   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:07.561924   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:07.586162   45025 cri.go:89] found id: ""
	I1211 00:20:07.586175   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.586192   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:07.586198   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:07.586254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:07.611295   45025 cri.go:89] found id: ""
	I1211 00:20:07.611309   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.611316   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:07.611321   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:07.611377   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:07.637224   45025 cri.go:89] found id: ""
	I1211 00:20:07.637237   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.637245   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:07.637249   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:07.637306   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:07.666366   45025 cri.go:89] found id: ""
	I1211 00:20:07.666379   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.666386   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:07.666391   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:07.666451   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:07.691800   45025 cri.go:89] found id: ""
	I1211 00:20:07.691814   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.691822   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:07.691827   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:07.691885   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:07.717290   45025 cri.go:89] found id: ""
	I1211 00:20:07.717304   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.717321   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:07.717326   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:07.717382   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:07.747011   45025 cri.go:89] found id: ""
	I1211 00:20:07.747024   45025 logs.go:282] 0 containers: []
	W1211 00:20:07.747031   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:07.747039   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:07.747048   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:07.816300   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:07.816318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:07.850783   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:07.850798   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:07.920354   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:07.920371   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:07.932012   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:07.932027   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:07.996529   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:07.988795   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.989430   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991195   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.991824   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:07.992846   13342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
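	The block above is one full iteration of minikube's control-plane wait: it probes for each expected component container with crictl, finds none, falls back to gathering CRI-O, container-status, kubelet, and dmesg logs, and the kubectl describe-nodes call fails because nothing is serving on localhost:8441. A minimal Go sketch of the same per-component probe follows; the component names and the crictl invocation are taken verbatim from the log, while the loop structure, use of os/exec, and error handling are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names taken verbatim from the probe sequence in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				// Corresponds to the logged warning: No container was found matching "<component>"
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: found %d container(s)\n", name, len(ids))
		}
	}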
	I1211 00:20:10.496978   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:10.507125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:10.507193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:10.532780   45025 cri.go:89] found id: ""
	I1211 00:20:10.532794   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.532801   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:10.532807   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:10.532863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:10.558194   45025 cri.go:89] found id: ""
	I1211 00:20:10.558207   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.558214   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:10.558219   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:10.558277   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:10.583482   45025 cri.go:89] found id: ""
	I1211 00:20:10.583496   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.583503   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:10.583508   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:10.583566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:10.608826   45025 cri.go:89] found id: ""
	I1211 00:20:10.608840   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.608847   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:10.608851   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:10.608910   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:10.637533   45025 cri.go:89] found id: ""
	I1211 00:20:10.637548   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.637554   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:10.637559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:10.637620   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:10.662448   45025 cri.go:89] found id: ""
	I1211 00:20:10.662463   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.662471   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:10.662478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:10.662535   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:10.688164   45025 cri.go:89] found id: ""
	I1211 00:20:10.688187   45025 logs.go:282] 0 containers: []
	W1211 00:20:10.688195   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:10.688203   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:10.688213   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:10.718946   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:10.718981   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:10.783972   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:10.783992   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:10.795392   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:10.795408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:10.862892   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:10.854617   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.855500   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857187   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.857491   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:10.859028   13438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:10.862901   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:10.862911   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.437541   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:13.447617   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:13.447679   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:13.473117   45025 cri.go:89] found id: ""
	I1211 00:20:13.473131   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.473139   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:13.473144   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:13.473200   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:13.498616   45025 cri.go:89] found id: ""
	I1211 00:20:13.498629   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.498636   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:13.498641   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:13.498698   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:13.525802   45025 cri.go:89] found id: ""
	I1211 00:20:13.525824   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.525832   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:13.525836   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:13.525904   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:13.552063   45025 cri.go:89] found id: ""
	I1211 00:20:13.552077   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.552084   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:13.552092   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:13.552153   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:13.576789   45025 cri.go:89] found id: ""
	I1211 00:20:13.576802   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.576809   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:13.576816   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:13.576872   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:13.602028   45025 cri.go:89] found id: ""
	I1211 00:20:13.602042   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.602059   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:13.602065   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:13.602120   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:13.629268   45025 cri.go:89] found id: ""
	I1211 00:20:13.629282   45025 logs.go:282] 0 containers: []
	W1211 00:20:13.629299   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:13.629307   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:13.629318   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:13.694395   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:13.694413   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:13.705346   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:13.705362   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:13.771138   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:13.763779   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.764177   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765651   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.765951   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:13.767333   13532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:13.771148   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:13.771158   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:13.842879   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:13.842896   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:16.379425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:16.389574   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:16.389639   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:16.414634   45025 cri.go:89] found id: ""
	I1211 00:20:16.414647   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.414654   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:16.414659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:16.414721   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:16.441274   45025 cri.go:89] found id: ""
	I1211 00:20:16.441287   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.441293   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:16.441298   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:16.441352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:16.466318   45025 cri.go:89] found id: ""
	I1211 00:20:16.466331   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.466338   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:16.466343   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:16.466399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:16.492814   45025 cri.go:89] found id: ""
	I1211 00:20:16.492827   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.492834   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:16.492839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:16.492894   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:16.518104   45025 cri.go:89] found id: ""
	I1211 00:20:16.518117   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.518125   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:16.518130   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:16.518193   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:16.543245   45025 cri.go:89] found id: ""
	I1211 00:20:16.543260   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.543267   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:16.543272   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:16.543331   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:16.567767   45025 cri.go:89] found id: ""
	I1211 00:20:16.567781   45025 logs.go:282] 0 containers: []
	W1211 00:20:16.567788   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:16.567795   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:16.567806   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:16.635880   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:16.635897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:16.647253   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:16.647269   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:16.711132   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:16.702714   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.703283   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.704806   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.705129   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:16.706573   13636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:16.711143   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:16.711154   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:16.781461   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:16.781479   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.312031   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:19.322411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:19.322469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:19.349102   45025 cri.go:89] found id: ""
	I1211 00:20:19.349116   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.349124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:19.349129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:19.349190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:19.373803   45025 cri.go:89] found id: ""
	I1211 00:20:19.373818   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.373825   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:19.373830   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:19.373891   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:19.402187   45025 cri.go:89] found id: ""
	I1211 00:20:19.402201   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.402208   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:19.402213   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:19.402274   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:19.427606   45025 cri.go:89] found id: ""
	I1211 00:20:19.427620   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.427628   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:19.427633   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:19.427693   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:19.452647   45025 cri.go:89] found id: ""
	I1211 00:20:19.452660   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.452667   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:19.452671   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:19.452732   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:19.482184   45025 cri.go:89] found id: ""
	I1211 00:20:19.482198   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.482205   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:19.482211   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:19.482266   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:19.508334   45025 cri.go:89] found id: ""
	I1211 00:20:19.508348   45025 logs.go:282] 0 containers: []
	W1211 00:20:19.508355   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:19.508369   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:19.508379   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:19.582679   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:19.582703   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:19.613878   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:19.613897   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:19.688185   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:19.688206   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:19.699902   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:19.699917   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:19.768799   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:19.760352   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.761106   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.762577   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.763047   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:19.764836   13755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
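	The timestamps show this cycle repeating roughly every three seconds: each iteration opens with sudo pgrep -xnf kube-apiserver.*minikube.*, and only when that finds no process does the container probing and log gathering re-run. A rough sketch of that poll, under the assumption of a fixed three-second interval read off the log timestamps (the real cadence and retry bookkeeping are minikube internals not shown here):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for {
			// Mirrors the logged command: sudo pgrep -xnf kube-apiserver.*minikube.*
			err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process is running")
				return
			}
			// No apiserver process yet; the real loop gathers the logs shown
			// above before sleeping and trying again.
			time.Sleep(3 * time.Second)
		}
	}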
	I1211 00:20:22.269027   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:22.278950   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:22.279030   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:22.303632   45025 cri.go:89] found id: ""
	I1211 00:20:22.303646   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.303653   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:22.303659   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:22.303714   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:22.329589   45025 cri.go:89] found id: ""
	I1211 00:20:22.329602   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.329647   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:22.329653   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:22.329707   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:22.359724   45025 cri.go:89] found id: ""
	I1211 00:20:22.359737   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.359744   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:22.359749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:22.359806   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:22.385684   45025 cri.go:89] found id: ""
	I1211 00:20:22.385697   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.385704   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:22.385709   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:22.385768   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:22.411515   45025 cri.go:89] found id: ""
	I1211 00:20:22.411529   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.411536   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:22.411541   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:22.411601   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:22.437841   45025 cri.go:89] found id: ""
	I1211 00:20:22.437858   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.437865   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:22.437870   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:22.437926   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:22.462799   45025 cri.go:89] found id: ""
	I1211 00:20:22.462812   45025 logs.go:282] 0 containers: []
	W1211 00:20:22.462819   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:22.462830   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:22.462840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:22.530683   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:22.530700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:22.541777   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:22.541792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:22.606464   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:22.597547   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.598381   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600239   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.600936   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:22.602587   13848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:22.606473   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:22.606484   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:22.675683   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:22.675704   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:25.205679   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:25.215714   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:25.215772   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:25.240624   45025 cri.go:89] found id: ""
	I1211 00:20:25.240637   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.240644   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:25.240650   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:25.240704   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:25.266729   45025 cri.go:89] found id: ""
	I1211 00:20:25.266743   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.266761   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:25.266766   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:25.266833   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:25.292270   45025 cri.go:89] found id: ""
	I1211 00:20:25.292284   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.292291   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:25.292296   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:25.292352   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:25.316988   45025 cri.go:89] found id: ""
	I1211 00:20:25.317013   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.317021   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:25.317027   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:25.317094   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:25.342079   45025 cri.go:89] found id: ""
	I1211 00:20:25.342092   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.342100   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:25.342105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:25.342166   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:25.369363   45025 cri.go:89] found id: ""
	I1211 00:20:25.369376   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.369383   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:25.369388   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:25.369445   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:25.395141   45025 cri.go:89] found id: ""
	I1211 00:20:25.395155   45025 logs.go:282] 0 containers: []
	W1211 00:20:25.395166   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:25.395173   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:25.395183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:25.459743   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:25.459761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:25.470311   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:25.470325   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:25.537864   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:25.529411   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.530644   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.531551   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533044   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:25.533492   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:25.537874   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:25.537884   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:25.605782   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:25.605800   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:28.140709   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:28.152210   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:28.152270   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:28.192161   45025 cri.go:89] found id: ""
	I1211 00:20:28.192175   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.192182   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:28.192188   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:28.192254   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:28.226107   45025 cri.go:89] found id: ""
	I1211 00:20:28.226121   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.226128   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:28.226133   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:28.226190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:28.252351   45025 cri.go:89] found id: ""
	I1211 00:20:28.252364   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.252371   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:28.252376   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:28.252437   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:28.277856   45025 cri.go:89] found id: ""
	I1211 00:20:28.277869   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.277876   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:28.277882   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:28.277942   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:28.303425   45025 cri.go:89] found id: ""
	I1211 00:20:28.303442   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.303449   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:28.303454   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:28.303533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:28.327952   45025 cri.go:89] found id: ""
	I1211 00:20:28.327965   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.327973   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:28.327978   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:28.328036   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:28.352541   45025 cri.go:89] found id: ""
	I1211 00:20:28.352556   45025 logs.go:282] 0 containers: []
	W1211 00:20:28.352563   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:28.352571   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:28.352581   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:28.417587   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:28.417606   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:28.428990   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:28.429005   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:28.493232   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:28.484652   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.485464   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.486957   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.487563   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:28.489177   14059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:28.493242   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:28.493252   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:28.561239   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:28.561257   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.093955   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:31.104422   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:31.104484   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:31.130996   45025 cri.go:89] found id: ""
	I1211 00:20:31.131011   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.131018   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:31.131023   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:31.131088   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:31.170443   45025 cri.go:89] found id: ""
	I1211 00:20:31.170457   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.170465   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:31.170470   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:31.170531   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:31.204748   45025 cri.go:89] found id: ""
	I1211 00:20:31.204769   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.204777   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:31.204781   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:31.204846   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:31.235573   45025 cri.go:89] found id: ""
	I1211 00:20:31.235587   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.235594   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:31.235606   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:31.235664   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:31.260669   45025 cri.go:89] found id: ""
	I1211 00:20:31.260683   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.260690   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:31.260695   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:31.260753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:31.286253   45025 cri.go:89] found id: ""
	I1211 00:20:31.286267   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.286274   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:31.286279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:31.286338   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:31.313885   45025 cri.go:89] found id: ""
	I1211 00:20:31.313903   45025 logs.go:282] 0 containers: []
	W1211 00:20:31.313910   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:31.313917   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:31.313928   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:31.376250   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:31.368298   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.368737   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370334   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.370690   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:31.372228   14157 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:31.376260   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:31.376271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:31.445930   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:31.445948   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:31.477909   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:31.477923   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:31.547558   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:31.547575   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
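
Each diagnostic cycle above queries the CRI for every control-plane component by name and treats an empty ID list as "not running". For readers reproducing the diagnosis by hand, this is a minimal sketch of the same per-component probe; the crictl invocation is taken verbatim from the log lines above, while the loop wrapper itself is illustrative and not minikube's code:

    # Sketch: replicate the per-component CRI probe from the log above.
    # crictl flags are verbatim from the log; the loop is illustrative.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "no container found matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done

An empty result for all seven names, as seen in every cycle here, means the CRI has never created the control-plane containers at all.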
	I1211 00:20:34.060343   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:34.071407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:34.071468   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:34.097367   45025 cri.go:89] found id: ""
	I1211 00:20:34.097381   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.097389   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:34.097394   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:34.097455   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:34.125233   45025 cri.go:89] found id: ""
	I1211 00:20:34.125246   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.125253   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:34.125258   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:34.125313   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:34.152711   45025 cri.go:89] found id: ""
	I1211 00:20:34.152724   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.152731   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:34.152735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:34.152797   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:34.183533   45025 cri.go:89] found id: ""
	I1211 00:20:34.183547   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.183553   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:34.183559   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:34.183627   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:34.212367   45025 cri.go:89] found id: ""
	I1211 00:20:34.212379   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.212386   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:34.212392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:34.212450   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:34.239991   45025 cri.go:89] found id: ""
	I1211 00:20:34.240005   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.240012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:34.240017   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:34.240084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:34.265795   45025 cri.go:89] found id: ""
	I1211 00:20:34.265809   45025 logs.go:282] 0 containers: []
	W1211 00:20:34.265816   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:34.265823   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:34.265833   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:34.335452   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:34.335471   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:34.366714   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:34.366729   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:34.434761   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:34.434779   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:34.445767   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:34.445782   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:34.513054   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:34.504869   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.505538   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507123   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.507566   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:34.509155   14282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
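
The repeated "connection refused" stderr above is a direct consequence of the empty crictl results: with no kube-apiserver container, nothing listens on localhost:8441, so the on-node kubectl fails before any API call. The refusal can be confirmed without kubectl, for example with a plain TCP probe; curl here is an assumed tool on the host, the port and path come from the stderr above:

    # Sketch: confirm the refused connection reported in the stderr above.
    # Port 8441 and the /api path come from the log; curl is an assumption.
    curl -sk --max-time 5 "https://localhost:8441/api?timeout=32s" \
      || echo "apiserver not listening on :8441 (expected while no container runs)"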
	I1211 00:20:37.014301   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:37.029619   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:37.029688   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:37.061510   45025 cri.go:89] found id: ""
	I1211 00:20:37.061525   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.061533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:37.061539   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:37.061597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:37.087429   45025 cri.go:89] found id: ""
	I1211 00:20:37.087442   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.087449   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:37.087454   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:37.087513   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:37.113865   45025 cri.go:89] found id: ""
	I1211 00:20:37.113878   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.113885   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:37.113890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:37.113951   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:37.139634   45025 cri.go:89] found id: ""
	I1211 00:20:37.139647   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.139655   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:37.139659   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:37.139723   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:37.177513   45025 cri.go:89] found id: ""
	I1211 00:20:37.177527   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.177535   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:37.177540   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:37.177599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:37.207209   45025 cri.go:89] found id: ""
	I1211 00:20:37.207223   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.207230   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:37.207235   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:37.207291   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:37.235860   45025 cri.go:89] found id: ""
	I1211 00:20:37.235874   45025 logs.go:282] 0 containers: []
	W1211 00:20:37.235880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:37.235888   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:37.235898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:37.302242   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:37.302260   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:37.313364   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:37.313380   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:37.383109   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:37.374337   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.375266   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377112   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.377485   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:37.378635   14377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:37.383119   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:37.383134   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:37.452480   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:37.452497   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
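
The cycles repeat at roughly three-second intervals, each opening with the same pgrep probe for a running kube-apiserver process. A bounded wait loop equivalent to that cadence might look like the sketch below; the pgrep pattern is verbatim from the log, while the interval and retry limit are illustrative, not minikube's actual values:

    # Sketch: the wait loop implied by the repeated pgrep probes above.
    # The pgrep pattern is verbatim; interval/limit are illustrative.
    for i in $(seq 1 100); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is running"; break
      fi
      sleep 3
    done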
	I1211 00:20:39.981534   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:39.992011   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:39.992074   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:40.037108   45025 cri.go:89] found id: ""
	I1211 00:20:40.037123   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.037131   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:40.037137   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:40.037205   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:40.073935   45025 cri.go:89] found id: ""
	I1211 00:20:40.073950   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.073958   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:40.073963   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:40.074024   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:40.103233   45025 cri.go:89] found id: ""
	I1211 00:20:40.103247   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.103255   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:40.103260   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:40.103324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:40.130384   45025 cri.go:89] found id: ""
	I1211 00:20:40.130398   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.130405   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:40.130411   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:40.130482   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:40.168123   45025 cri.go:89] found id: ""
	I1211 00:20:40.168137   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.168143   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:40.168149   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:40.168209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:40.206729   45025 cri.go:89] found id: ""
	I1211 00:20:40.206743   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.206750   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:40.206755   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:40.206814   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:40.237917   45025 cri.go:89] found id: ""
	I1211 00:20:40.237930   45025 logs.go:282] 0 containers: []
	W1211 00:20:40.237937   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:40.237945   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:40.237954   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:40.306231   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:40.306249   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:40.335237   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:40.335256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:40.407102   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:40.407124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:40.418948   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:40.418987   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:40.487059   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:40.478492   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.479144   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.480687   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.481126   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:40.482826   14495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
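
Alongside the CRI probes, each cycle collects host-side evidence from four sources: kubelet and CRI-O journals, the kernel ring buffer filtered to warnings and above, and the container runtime's process list with a docker fallback. These are the exact commands run over ssh in the log above, gathered here for convenience:

    # Sketch: the host-side log collection performed in each cycle above,
    # using the same journalctl/dmesg/crictl invocations shown in the log.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a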
	I1211 00:20:42.987371   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:42.997627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:42.997687   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:43.034834   45025 cri.go:89] found id: ""
	I1211 00:20:43.034847   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.034854   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:43.034858   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:43.034917   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:43.061014   45025 cri.go:89] found id: ""
	I1211 00:20:43.061028   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.061035   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:43.061040   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:43.061111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:43.086728   45025 cri.go:89] found id: ""
	I1211 00:20:43.086742   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.086749   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:43.086754   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:43.086815   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:43.112537   45025 cri.go:89] found id: ""
	I1211 00:20:43.112551   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.112557   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:43.112563   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:43.112619   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:43.138331   45025 cri.go:89] found id: ""
	I1211 00:20:43.138358   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.138365   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:43.138370   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:43.138440   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:43.177883   45025 cri.go:89] found id: ""
	I1211 00:20:43.177895   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.177902   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:43.177908   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:43.177976   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:43.208963   45025 cri.go:89] found id: ""
	I1211 00:20:43.208976   45025 logs.go:282] 0 containers: []
	W1211 00:20:43.208984   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:43.208991   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:43.209001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:43.276100   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:43.276119   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:43.287251   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:43.287266   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:43.358374   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:43.348831   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.349609   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.351499   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.352264   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:43.353789   14589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:43.358389   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:43.358399   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:43.430845   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:43.430863   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:45.960980   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:45.971128   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:45.971189   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:45.997483   45025 cri.go:89] found id: ""
	I1211 00:20:45.997497   45025 logs.go:282] 0 containers: []
	W1211 00:20:45.997504   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:45.997509   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:45.997566   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:46.030243   45025 cri.go:89] found id: ""
	I1211 00:20:46.030257   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.030265   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:46.030280   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:46.030341   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:46.057812   45025 cri.go:89] found id: ""
	I1211 00:20:46.057826   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.057834   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:46.057839   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:46.057896   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:46.094313   45025 cri.go:89] found id: ""
	I1211 00:20:46.094326   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.094334   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:46.094339   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:46.094403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:46.120781   45025 cri.go:89] found id: ""
	I1211 00:20:46.120796   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.120803   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:46.120808   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:46.120867   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:46.153078   45025 cri.go:89] found id: ""
	I1211 00:20:46.153091   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.153099   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:46.153105   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:46.153164   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:46.184025   45025 cri.go:89] found id: ""
	I1211 00:20:46.184038   45025 logs.go:282] 0 containers: []
	W1211 00:20:46.184045   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:46.184052   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:46.184065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:46.195376   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:46.195391   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:46.264561   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:46.255814   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.256505   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258288   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.258859   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:46.260582   14694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:46.264571   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:46.264583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:46.334575   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:46.334592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:46.365686   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:46.365701   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:48.932730   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:48.943221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:48.943289   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:48.970754   45025 cri.go:89] found id: ""
	I1211 00:20:48.970769   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.970775   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:48.970781   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:48.970851   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:48.998179   45025 cri.go:89] found id: ""
	I1211 00:20:48.998193   45025 logs.go:282] 0 containers: []
	W1211 00:20:48.998200   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:48.998205   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:48.998265   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:49.027459   45025 cri.go:89] found id: ""
	I1211 00:20:49.027472   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.027485   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:49.027490   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:49.027554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:49.053666   45025 cri.go:89] found id: ""
	I1211 00:20:49.053693   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.053700   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:49.053705   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:49.053773   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:49.080140   45025 cri.go:89] found id: ""
	I1211 00:20:49.080155   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.080162   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:49.080167   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:49.080223   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:49.106258   45025 cri.go:89] found id: ""
	I1211 00:20:49.106281   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.106289   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:49.106294   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:49.106362   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:49.131929   45025 cri.go:89] found id: ""
	I1211 00:20:49.131952   45025 logs.go:282] 0 containers: []
	W1211 00:20:49.131960   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:49.131967   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:49.131978   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:49.216291   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:49.216315   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:49.247289   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:49.247308   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:49.319005   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:49.319026   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:49.330154   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:49.330171   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:49.399415   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:49.391075   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.391774   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393364   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.393977   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:49.395497   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:51.899678   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:51.910510   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:51.910571   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:51.941358   45025 cri.go:89] found id: ""
	I1211 00:20:51.941372   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.941379   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:51.941384   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:51.941441   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:51.972273   45025 cri.go:89] found id: ""
	I1211 00:20:51.972287   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.972295   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:51.972300   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:51.972357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:51.998172   45025 cri.go:89] found id: ""
	I1211 00:20:51.998184   45025 logs.go:282] 0 containers: []
	W1211 00:20:51.998191   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:51.998197   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:51.998256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:52.028439   45025 cri.go:89] found id: ""
	I1211 00:20:52.028453   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.028460   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:52.028465   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:52.028526   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:52.060485   45025 cri.go:89] found id: ""
	I1211 00:20:52.060500   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.060508   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:52.060513   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:52.060574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:52.093990   45025 cri.go:89] found id: ""
	I1211 00:20:52.094005   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.094012   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:52.094018   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:52.094084   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:52.122577   45025 cri.go:89] found id: ""
	I1211 00:20:52.122592   45025 logs.go:282] 0 containers: []
	W1211 00:20:52.122599   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:52.122606   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:52.122624   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:52.191378   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:52.191396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:52.203404   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:52.203421   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:52.272572   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:52.264376   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.265030   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.266609   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.267024   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:52.268628   14912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:52.272582   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:52.272592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:52.340655   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:52.340672   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
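
One last check worth noting: the failing describe-nodes command uses the on-node kubeconfig at /var/lib/minikube/kubeconfig, which is where the localhost:8441 endpoint comes from. That endpoint can be read back directly; the binary path and kubeconfig path are taken from the log above, while the jsonpath expression is an assumption that the file holds a single cluster entry:

    # Sketch: inspect which endpoint the on-node kubeconfig targets.
    # Paths are from the log; the jsonpath assumes one cluster entry.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'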
	I1211 00:20:54.871996   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:54.882238   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:54.882299   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:54.908417   45025 cri.go:89] found id: ""
	I1211 00:20:54.908430   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.908437   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:54.908442   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:54.908512   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:54.937462   45025 cri.go:89] found id: ""
	I1211 00:20:54.937475   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.937482   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:54.937487   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:54.937547   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:54.965546   45025 cri.go:89] found id: ""
	I1211 00:20:54.965560   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.965567   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:54.965572   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:54.965629   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:54.991381   45025 cri.go:89] found id: ""
	I1211 00:20:54.991395   45025 logs.go:282] 0 containers: []
	W1211 00:20:54.991403   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:54.991407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:54.991469   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:55.023225   45025 cri.go:89] found id: ""
	I1211 00:20:55.023243   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.023251   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:55.023257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:55.023340   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:55.069033   45025 cri.go:89] found id: ""
	I1211 00:20:55.069049   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.069056   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:55.069062   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:55.069130   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:55.104401   45025 cri.go:89] found id: ""
	I1211 00:20:55.104417   45025 logs.go:282] 0 containers: []
	W1211 00:20:55.104424   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:20:55.104432   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:55.104444   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:55.117919   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:55.117939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:55.207253   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:20:55.195947   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.196982   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198004   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.198732   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:55.202921   15010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:20:55.207264   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:55.207275   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:55.285978   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:55.286001   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:55.318311   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:55.318327   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:57.883510   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:20:57.893407   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:20:57.893478   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:20:57.918657   45025 cri.go:89] found id: ""
	I1211 00:20:57.918670   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.918677   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:20:57.918684   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:20:57.918739   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:20:57.944248   45025 cri.go:89] found id: ""
	I1211 00:20:57.944261   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.944268   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:20:57.944274   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:20:57.944337   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:20:57.969321   45025 cri.go:89] found id: ""
	I1211 00:20:57.969335   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.969342   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:20:57.969347   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:20:57.969403   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:20:57.994466   45025 cri.go:89] found id: ""
	I1211 00:20:57.994482   45025 logs.go:282] 0 containers: []
	W1211 00:20:57.994490   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:20:57.994495   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:20:57.994554   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:20:58.021937   45025 cri.go:89] found id: ""
	I1211 00:20:58.021954   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.021962   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:20:58.021967   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:20:58.022033   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:20:58.048826   45025 cri.go:89] found id: ""
	I1211 00:20:58.048840   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.048848   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:20:58.048854   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:20:58.048912   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:20:58.077218   45025 cri.go:89] found id: ""
	I1211 00:20:58.077231   45025 logs.go:282] 0 containers: []
	W1211 00:20:58.077239   45025 logs.go:284] No container was found matching "kindnet"
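	Each cycle above walks the same checklist: pgrep for a running apiserver, then one crictl query per control-plane component, where empty stdout from "crictl ps -a --quiet --name=<component>" is read as "no container found". A minimal sketch of that enumeration, assuming crictl is on PATH and sudo is available (it mirrors the commands shown in the log, not minikube's internal API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; empty output
			// means the runtime has never created a container with this name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			id := strings.TrimSpace(string(out))
			if err != nil || id == "" {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %s\n", name, id)
		}
	}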
	I1211 00:20:58.077246   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:20:58.077256   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:20:58.145681   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:20:58.145698   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:20:58.191796   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:20:58.191814   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:20:58.268737   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:20:58.268756   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:20:58.280057   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:20:58.280074   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:20:58.347775   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:20:58.339056   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.339797   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.341564   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.342165   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:20:58.343664   15138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:00.848653   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:00.859447   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:00.859507   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:00.885107   45025 cri.go:89] found id: ""
	I1211 00:21:00.885123   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.885130   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:00.885136   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:00.885195   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:00.916160   45025 cri.go:89] found id: ""
	I1211 00:21:00.916174   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.916181   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:00.916186   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:00.916242   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:00.941904   45025 cri.go:89] found id: ""
	I1211 00:21:00.941918   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.941926   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:00.941931   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:00.941996   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:00.969553   45025 cri.go:89] found id: ""
	I1211 00:21:00.969566   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.969573   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:00.969579   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:00.969640   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:00.995856   45025 cri.go:89] found id: ""
	I1211 00:21:00.995869   45025 logs.go:282] 0 containers: []
	W1211 00:21:00.995876   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:00.995881   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:00.995936   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:01.023643   45025 cri.go:89] found id: ""
	I1211 00:21:01.023672   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.023679   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:01.023685   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:01.023753   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:01.049959   45025 cri.go:89] found id: ""
	I1211 00:21:01.049972   45025 logs.go:282] 0 containers: []
	W1211 00:21:01.049979   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:01.049986   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:01.049996   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:01.117206   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:01.117224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:01.129158   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:01.129174   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:01.221837   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:01.209229   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.213702   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.214339   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216100   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:01.216652   15224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:01.221848   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:01.221858   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:01.292030   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:01.292052   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:03.824471   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:03.834984   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:03.835048   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:03.865620   45025 cri.go:89] found id: ""
	I1211 00:21:03.865633   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.865640   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:03.865646   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:03.865706   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:03.894960   45025 cri.go:89] found id: ""
	I1211 00:21:03.895000   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.895012   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:03.895018   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:03.895093   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:03.922002   45025 cri.go:89] found id: ""
	I1211 00:21:03.922016   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.922033   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:03.922039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:03.922114   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:03.949011   45025 cri.go:89] found id: ""
	I1211 00:21:03.949025   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.949032   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:03.949037   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:03.949104   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:03.979941   45025 cri.go:89] found id: ""
	I1211 00:21:03.979955   45025 logs.go:282] 0 containers: []
	W1211 00:21:03.979983   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:03.979988   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:03.980056   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:04.005356   45025 cri.go:89] found id: ""
	I1211 00:21:04.005379   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.005386   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:04.005392   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:04.005498   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:04.036172   45025 cri.go:89] found id: ""
	I1211 00:21:04.036193   45025 logs.go:282] 0 containers: []
	W1211 00:21:04.036201   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:04.036210   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:04.036224   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:04.075735   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:04.075754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:04.141955   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:04.141976   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:04.154375   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:04.154390   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:04.236732   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:04.226754   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.227581   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.230678   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.231221   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:04.232796   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:04.236744   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:04.236754   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:06.812855   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:06.823280   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:06.823348   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:06.849675   45025 cri.go:89] found id: ""
	I1211 00:21:06.849689   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.849696   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:06.849701   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:06.849760   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:06.876012   45025 cri.go:89] found id: ""
	I1211 00:21:06.876026   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.876033   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:06.876038   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:06.876095   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:06.901644   45025 cri.go:89] found id: ""
	I1211 00:21:06.901658   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.901664   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:06.901669   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:06.901726   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:06.926863   45025 cri.go:89] found id: ""
	I1211 00:21:06.926877   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.926885   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:06.926890   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:06.926946   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:06.956891   45025 cri.go:89] found id: ""
	I1211 00:21:06.956905   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.956912   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:06.956917   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:06.956978   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:06.981741   45025 cri.go:89] found id: ""
	I1211 00:21:06.981754   45025 logs.go:282] 0 containers: []
	W1211 00:21:06.981762   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:06.981767   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:06.981826   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:07.007640   45025 cri.go:89] found id: ""
	I1211 00:21:07.007653   45025 logs.go:282] 0 containers: []
	W1211 00:21:07.007660   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:07.007666   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:07.007678   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:07.076566   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:07.076583   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:07.087895   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:07.087910   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:07.159453   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:07.146952   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.147699   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.149250   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.150203   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:07.152456   15438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:07.159463   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:07.159474   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:07.242834   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:07.242853   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:09.772607   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:09.782749   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:09.782809   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:09.809021   45025 cri.go:89] found id: ""
	I1211 00:21:09.809035   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.809042   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:09.809048   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:09.809106   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:09.837599   45025 cri.go:89] found id: ""
	I1211 00:21:09.837612   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.837619   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:09.837624   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:09.837681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:09.865754   45025 cri.go:89] found id: ""
	I1211 00:21:09.865767   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.865775   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:09.865780   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:09.865841   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:09.890922   45025 cri.go:89] found id: ""
	I1211 00:21:09.890936   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.890943   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:09.890948   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:09.891034   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:09.916087   45025 cri.go:89] found id: ""
	I1211 00:21:09.916100   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.916108   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:09.916113   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:09.916169   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:09.941494   45025 cri.go:89] found id: ""
	I1211 00:21:09.941507   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.941514   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:09.941520   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:09.941574   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:09.967438   45025 cri.go:89] found id: ""
	I1211 00:21:09.967452   45025 logs.go:282] 0 containers: []
	W1211 00:21:09.967460   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:09.967467   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:09.967478   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:10.042566   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:10.032083   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.032850   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.035627   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.036166   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:10.037948   15539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:10.042577   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:10.042589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:10.114716   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:10.114734   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:10.147711   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:10.147727   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:10.216212   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:10.216230   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:12.728208   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:12.738793   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:12.738852   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:12.765512   45025 cri.go:89] found id: ""
	I1211 00:21:12.765527   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.765534   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:12.765540   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:12.765599   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:12.792241   45025 cri.go:89] found id: ""
	I1211 00:21:12.792254   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.792261   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:12.792266   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:12.792326   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:12.821945   45025 cri.go:89] found id: ""
	I1211 00:21:12.821959   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.821966   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:12.821971   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:12.822029   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:12.847567   45025 cri.go:89] found id: ""
	I1211 00:21:12.847581   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.847588   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:12.847593   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:12.847649   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:12.873684   45025 cri.go:89] found id: ""
	I1211 00:21:12.873699   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.873706   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:12.873711   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:12.873769   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:12.899211   45025 cri.go:89] found id: ""
	I1211 00:21:12.899225   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.899233   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:12.899241   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:12.899301   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:12.925366   45025 cri.go:89] found id: ""
	I1211 00:21:12.925380   45025 logs.go:282] 0 containers: []
	W1211 00:21:12.925387   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:12.925395   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:12.925408   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:12.992650   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:12.992667   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:13.004006   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:13.004021   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:13.070046   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:13.060977   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.061855   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.063662   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.064880   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:13.065622   15651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:13.070055   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:13.070065   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:13.137969   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:13.137986   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:15.678794   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:15.688954   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:15.689022   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:15.714099   45025 cri.go:89] found id: ""
	I1211 00:21:15.714113   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.714120   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:15.714125   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:15.714190   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:15.738722   45025 cri.go:89] found id: ""
	I1211 00:21:15.738735   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.738742   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:15.738747   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:15.738801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:15.764238   45025 cri.go:89] found id: ""
	I1211 00:21:15.764251   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.764258   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:15.764269   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:15.764330   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:15.789987   45025 cri.go:89] found id: ""
	I1211 00:21:15.790000   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.790007   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:15.790012   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:15.790066   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:15.815536   45025 cri.go:89] found id: ""
	I1211 00:21:15.815549   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.815556   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:15.815567   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:15.815626   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:15.840404   45025 cri.go:89] found id: ""
	I1211 00:21:15.840424   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.840433   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:15.840438   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:15.840497   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:15.865028   45025 cri.go:89] found id: ""
	I1211 00:21:15.865041   45025 logs.go:282] 0 containers: []
	W1211 00:21:15.865048   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:15.865054   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:15.865064   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:15.930832   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:15.930850   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:15.942270   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:15.942285   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:16.008579   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:16.000061   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.000957   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.002703   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.003145   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:16.004633   15757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:16.008589   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:16.008600   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:16.086023   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:16.086047   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:18.616564   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:18.627177   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:18.627235   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:18.655749   45025 cri.go:89] found id: ""
	I1211 00:21:18.655763   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.655771   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:18.655776   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:18.655838   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:18.685932   45025 cri.go:89] found id: ""
	I1211 00:21:18.685946   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.685953   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:18.685958   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:18.686019   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:18.713761   45025 cri.go:89] found id: ""
	I1211 00:21:18.713775   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.713783   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:18.713788   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:18.713847   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:18.740458   45025 cri.go:89] found id: ""
	I1211 00:21:18.740472   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.740480   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:18.740485   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:18.740540   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:18.766011   45025 cri.go:89] found id: ""
	I1211 00:21:18.766025   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.766032   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:18.766036   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:18.766092   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:18.791387   45025 cri.go:89] found id: ""
	I1211 00:21:18.791401   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.791409   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:18.791414   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:18.791471   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:18.817326   45025 cri.go:89] found id: ""
	I1211 00:21:18.817340   45025 logs.go:282] 0 containers: []
	W1211 00:21:18.817347   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:18.817354   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:18.817366   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:18.885570   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:18.876789   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.877521   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879346   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.879866   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:18.881477   15856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1211 00:21:18.885581   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:18.885592   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:18.953656   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:18.953674   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:18.981613   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:18.981629   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:19.048252   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:19.048271   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.561008   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:21.571125   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:21.571184   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:21.597491   45025 cri.go:89] found id: ""
	I1211 00:21:21.597505   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.597512   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:21.597520   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:21.597576   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:21.623022   45025 cri.go:89] found id: ""
	I1211 00:21:21.623040   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.623047   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:21.623052   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:21.623109   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:21.648127   45025 cri.go:89] found id: ""
	I1211 00:21:21.648141   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.648148   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:21.648154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:21.648212   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:21.673563   45025 cri.go:89] found id: ""
	I1211 00:21:21.673577   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.673584   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:21.673589   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:21.673646   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:21.701744   45025 cri.go:89] found id: ""
	I1211 00:21:21.701757   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.701764   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:21.701769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:21.701830   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:21.727163   45025 cri.go:89] found id: ""
	I1211 00:21:21.727177   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.727184   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:21.727189   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:21.727247   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:21.753680   45025 cri.go:89] found id: ""
	I1211 00:21:21.753694   45025 logs.go:282] 0 containers: []
	W1211 00:21:21.753702   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:21.753709   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:21.753720   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:21.764845   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:21.764862   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:21.825854   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:21.817363   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.818281   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819353   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.819861   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:21.821515   15967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
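
The cycle above is the shape of a control plane that never started: every crictl listing for the core components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) returns an empty ID list, and kubectl's API discovery is refused on localhost:8441 — the repeated memcache.go lines within a single attempt are the client retrying discovery before giving up. A minimal sketch for rerunning the same checks by hand inside the node, assuming minikube ssh, crictl, ss, and curl are available there; the profile name is a placeholder:

    # open a shell on the affected node (profile name is hypothetical)
    minikube ssh -p <profile>
    # confirm the runtime has no containers at all, not merely no control-plane ones
    sudo crictl ps -a
    # check whether anything is bound to the apiserver port kubectl dials
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # the probe kubectl fails on, sent directly (livez is served unauthenticated by default)
    curl -sk https://localhost:8441/livez || echo "apiserver not reachable"
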
	I1211 00:21:21.825865   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:21.825877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:21.895180   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:21.895198   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:21.923512   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:21.923530   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.495120   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:24.505627   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:24.505701   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:24.532103   45025 cri.go:89] found id: ""
	I1211 00:21:24.532117   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.532124   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:24.532129   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:24.532183   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:24.561426   45025 cri.go:89] found id: ""
	I1211 00:21:24.561439   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.561447   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:24.561451   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:24.561509   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:24.591493   45025 cri.go:89] found id: ""
	I1211 00:21:24.591506   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.591514   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:24.591519   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:24.591582   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:24.618513   45025 cri.go:89] found id: ""
	I1211 00:21:24.618527   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.618534   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:24.618539   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:24.618596   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:24.644876   45025 cri.go:89] found id: ""
	I1211 00:21:24.644890   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.644899   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:24.644904   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:24.644963   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:24.674148   45025 cri.go:89] found id: ""
	I1211 00:21:24.674161   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.674168   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:24.674174   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:24.674236   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:24.700184   45025 cri.go:89] found id: ""
	I1211 00:21:24.700198   45025 logs.go:282] 0 containers: []
	W1211 00:21:24.700205   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:24.700212   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:24.700222   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:24.765329   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:24.765346   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:24.776593   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:24.776608   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:24.844320   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:24.835015   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.835816   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.837638   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.838286   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:24.839842   16073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
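
The gatherer also pulls the CRI-O journal on each pass; if those lines are quiet, it helps to confirm the runtime itself is healthy before suspecting the kubelet. A short sketch, assuming systemctl and crictl are present on the node:

    # service state of the container runtime
    sudo systemctl is-active crio
    # runtime status and conditions as the CRI reports them
    sudo crictl info
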
	I1211 00:21:24.844329   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:24.844342   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:24.912094   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:24.912111   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.443355   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:27.454562   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:27.454628   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:27.480512   45025 cri.go:89] found id: ""
	I1211 00:21:27.480526   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.480533   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:27.480538   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:27.480604   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:27.507028   45025 cri.go:89] found id: ""
	I1211 00:21:27.507041   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.507049   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:27.507054   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:27.507111   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:27.533346   45025 cri.go:89] found id: ""
	I1211 00:21:27.533360   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.533367   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:27.533372   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:27.533435   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:27.563021   45025 cri.go:89] found id: ""
	I1211 00:21:27.563034   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.563042   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:27.563047   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:27.563105   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:27.587813   45025 cri.go:89] found id: ""
	I1211 00:21:27.587831   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.587838   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:27.587843   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:27.587900   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:27.616925   45025 cri.go:89] found id: ""
	I1211 00:21:27.616938   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.616945   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:27.616951   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:27.617007   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:27.642256   45025 cri.go:89] found id: ""
	I1211 00:21:27.642269   45025 logs.go:282] 0 containers: []
	W1211 00:21:27.642276   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:27.642283   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:27.642294   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:27.653306   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:27.653326   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:27.716428   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:27.708615   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.709027   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710547   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.710862   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:27.712404   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:27.716438   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:27.716455   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:27.783513   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:27.783533   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:27.814010   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:27.814025   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:30.382748   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:30.393371   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:30.393432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:30.427609   45025 cri.go:89] found id: ""
	I1211 00:21:30.427623   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.427629   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:30.427635   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:30.427696   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:30.457893   45025 cri.go:89] found id: ""
	I1211 00:21:30.457907   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.457913   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:30.457918   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:30.457980   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:30.492222   45025 cri.go:89] found id: ""
	I1211 00:21:30.492234   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.492241   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:30.492246   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:30.492303   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:30.521511   45025 cri.go:89] found id: ""
	I1211 00:21:30.521525   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.521532   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:30.521537   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:30.521597   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:30.547821   45025 cri.go:89] found id: ""
	I1211 00:21:30.547835   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.547842   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:30.547847   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:30.547906   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:30.572652   45025 cri.go:89] found id: ""
	I1211 00:21:30.572666   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.572675   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:30.572681   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:30.572737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:30.601878   45025 cri.go:89] found id: ""
	I1211 00:21:30.601906   45025 logs.go:282] 0 containers: []
	W1211 00:21:30.601914   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:30.601921   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:30.601932   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:30.613084   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:30.613100   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:30.683127   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:30.672316   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.675910   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.676945   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.677588   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:30.679226   16281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:30.683136   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:30.683146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:30.750689   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:30.750707   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:30.784168   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:30.784183   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.353720   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:33.363733   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:33.363790   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:33.391890   45025 cri.go:89] found id: ""
	I1211 00:21:33.391904   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.391911   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:33.391917   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:33.391984   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:33.423803   45025 cri.go:89] found id: ""
	I1211 00:21:33.423816   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.423823   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:33.423828   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:33.423889   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:33.458122   45025 cri.go:89] found id: ""
	I1211 00:21:33.458135   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.458142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:33.458147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:33.458206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:33.485705   45025 cri.go:89] found id: ""
	I1211 00:21:33.485718   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.485725   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:33.485730   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:33.485786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:33.513596   45025 cri.go:89] found id: ""
	I1211 00:21:33.513609   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.513617   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:33.513622   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:33.513681   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:33.539390   45025 cri.go:89] found id: ""
	I1211 00:21:33.539403   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.539412   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:33.539418   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:33.539474   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:33.564837   45025 cri.go:89] found id: ""
	I1211 00:21:33.564849   45025 logs.go:282] 0 containers: []
	W1211 00:21:33.564856   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:33.564863   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:33.564873   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:33.629883   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:33.629902   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:33.641102   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:33.641118   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:33.708725   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:33.698945   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.700452   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.701454   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703251   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:33.703822   16387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:33.708736   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:33.708746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:33.777920   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:33.777939   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.306840   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:36.318198   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:36.318256   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:36.347923   45025 cri.go:89] found id: ""
	I1211 00:21:36.347936   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.347943   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:36.347948   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:36.348003   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:36.372908   45025 cri.go:89] found id: ""
	I1211 00:21:36.372921   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.372928   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:36.372934   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:36.372994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:36.398449   45025 cri.go:89] found id: ""
	I1211 00:21:36.398462   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.398470   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:36.398478   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:36.398533   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:36.438503   45025 cri.go:89] found id: ""
	I1211 00:21:36.438516   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.438523   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:36.438528   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:36.438585   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:36.468232   45025 cri.go:89] found id: ""
	I1211 00:21:36.468245   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.468253   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:36.468257   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:36.468318   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:36.494076   45025 cri.go:89] found id: ""
	I1211 00:21:36.494089   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.494096   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:36.494101   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:36.494168   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:36.521654   45025 cri.go:89] found id: ""
	I1211 00:21:36.521668   45025 logs.go:282] 0 containers: []
	W1211 00:21:36.521676   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:36.521689   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:36.521700   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:36.590822   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:36.590840   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:36.620876   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:36.620891   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:36.689379   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:36.689396   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:36.700340   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:36.700355   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:36.768766   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:36.760807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.761202   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.762807   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.763393   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:36.764923   16506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
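
Since the CRI listings stay empty on every pass, the next thing to check is whether the kubelet is running and why it is not launching the static control-plane pods. A hedged sketch, reusing the journalctl invocation already shown above plus standard systemd and kubeadm paths:

    # is the kubelet service active at all?
    sudo systemctl is-active kubelet
    # recent kubelet output, filtered for failures (same unit the gatherer reads)
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
    # the static pod manifests the kubelet is expected to start
    ls -l /etc/kubernetes/manifests
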
	I1211 00:21:39.270429   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:39.280501   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:39.280558   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:39.308182   45025 cri.go:89] found id: ""
	I1211 00:21:39.308203   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.308212   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:39.308218   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:39.308278   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:39.334096   45025 cri.go:89] found id: ""
	I1211 00:21:39.334113   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.334123   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:39.334132   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:39.334203   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:39.360088   45025 cri.go:89] found id: ""
	I1211 00:21:39.360101   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.360108   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:39.360115   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:39.360174   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:39.386315   45025 cri.go:89] found id: ""
	I1211 00:21:39.386328   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.386336   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:39.386341   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:39.386399   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:39.418994   45025 cri.go:89] found id: ""
	I1211 00:21:39.419008   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.419015   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:39.419020   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:39.419081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:39.446027   45025 cri.go:89] found id: ""
	I1211 00:21:39.446040   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.446047   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:39.446052   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:39.446119   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:39.474854   45025 cri.go:89] found id: ""
	I1211 00:21:39.474867   45025 logs.go:282] 0 containers: []
	W1211 00:21:39.474880   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:39.474888   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:39.474898   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:39.548615   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:39.548635   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:39.577039   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:39.577058   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:39.643644   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:39.643662   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:39.654782   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:39.654797   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:39.721483   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:39.713210   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.713993   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715477   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.715973   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:39.717443   16611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:42.221753   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:42.234138   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:42.234209   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:42.265615   45025 cri.go:89] found id: ""
	I1211 00:21:42.265631   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.265639   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:42.265645   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:42.265716   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:42.295341   45025 cri.go:89] found id: ""
	I1211 00:21:42.295357   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.295365   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:42.295371   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:42.295432   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:42.324010   45025 cri.go:89] found id: ""
	I1211 00:21:42.324025   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.324032   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:42.324039   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:42.324101   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:42.355998   45025 cri.go:89] found id: ""
	I1211 00:21:42.356012   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.356020   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:42.356025   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:42.356087   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:42.385254   45025 cri.go:89] found id: ""
	I1211 00:21:42.385267   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.385275   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:42.385279   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:42.385379   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:42.418942   45025 cri.go:89] found id: ""
	I1211 00:21:42.418956   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.418986   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:42.418993   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:42.419049   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:42.446484   45025 cri.go:89] found id: ""
	I1211 00:21:42.446497   45025 logs.go:282] 0 containers: []
	W1211 00:21:42.446504   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:42.446511   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:42.446522   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:42.521774   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:42.521792   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:42.533107   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:42.533124   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:42.601857   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:42.592994   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.593697   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.595426   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.596493   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:42.597357   16702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:42.601867   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:42.601877   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:42.670754   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:42.670773   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.205036   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:45.223242   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:45.223325   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:45.290545   45025 cri.go:89] found id: ""
	I1211 00:21:45.290560   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.290567   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:45.290580   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:45.290653   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:45.321549   45025 cri.go:89] found id: ""
	I1211 00:21:45.321562   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.321581   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:45.321587   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:45.321660   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:45.351332   45025 cri.go:89] found id: ""
	I1211 00:21:45.351345   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.351353   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:45.351358   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:45.351418   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:45.377195   45025 cri.go:89] found id: ""
	I1211 00:21:45.377208   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.377215   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:45.377221   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:45.377284   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:45.414830   45025 cri.go:89] found id: ""
	I1211 00:21:45.414844   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.414852   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:45.414857   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:45.414922   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:45.444982   45025 cri.go:89] found id: ""
	I1211 00:21:45.444996   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.445003   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:45.445008   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:45.445065   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:45.475344   45025 cri.go:89] found id: ""
	I1211 00:21:45.475358   45025 logs.go:282] 0 containers: []
	W1211 00:21:45.475365   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:45.475372   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:45.475388   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:45.544982   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:45.545000   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:45.578028   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:45.578044   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:45.650334   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:45.650360   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:45.661530   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:45.661547   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:45.726146   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:45.717745   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.718451   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720142   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.720598   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:45.722203   16821 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
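
Each iteration of this loop is the same health probe: minikube pgreps for a kube-apiserver process, asks crictl for each expected control-plane container, finds none, and then gathers kubelet, dmesg, and CRI-O logs before retrying. A minimal sketch of the probe, run by hand inside the node; the commands are the ones quoted in the log above (with shell quoting added), and the empty results reflect this run's observed state, not a general rule:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # no match: the apiserver process is gone
	sudo crictl ps -a --quiet --name=kube-apiserver  # empty: no container either
	sudo crictl ps -a --quiet --name=etcd            # likewise empty for every component checked
	curl -k https://localhost:8441/api               # connection refused, matching the stderr above
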
	I1211 00:21:48.226425   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:48.236595   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:48.236655   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:48.264517   45025 cri.go:89] found id: ""
	I1211 00:21:48.264531   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.264538   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:48.264544   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:48.264602   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:48.291335   45025 cri.go:89] found id: ""
	I1211 00:21:48.291349   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.291356   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:48.291361   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:48.291420   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:48.317975   45025 cri.go:89] found id: ""
	I1211 00:21:48.317996   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.318005   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:48.318010   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:48.318090   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:48.343743   45025 cri.go:89] found id: ""
	I1211 00:21:48.343757   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.343764   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:48.343769   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:48.343839   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:48.370548   45025 cri.go:89] found id: ""
	I1211 00:21:48.370561   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.370568   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:48.370573   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:48.370633   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:48.398956   45025 cri.go:89] found id: ""
	I1211 00:21:48.398991   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.398999   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:48.399004   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:48.399081   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:48.432879   45025 cri.go:89] found id: ""
	I1211 00:21:48.432892   45025 logs.go:282] 0 containers: []
	W1211 00:21:48.432900   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:48.432908   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:48.432918   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:48.514612   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:48.514631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:48.526574   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:48.526589   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:48.594430   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:48.586398   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.587036   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588504   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.588955   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:48.590438   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:48.594439   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:48.594449   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:48.662467   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:48.662487   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:51.193260   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:51.203850   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:51.203909   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:51.229218   45025 cri.go:89] found id: ""
	I1211 00:21:51.229232   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.229240   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:51.229249   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:51.229307   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:51.255535   45025 cri.go:89] found id: ""
	I1211 00:21:51.255549   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.255556   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:51.255561   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:51.255617   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:51.281281   45025 cri.go:89] found id: ""
	I1211 00:21:51.281295   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.281302   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:51.281306   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:51.281366   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:51.305242   45025 cri.go:89] found id: ""
	I1211 00:21:51.305256   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.305263   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:51.305268   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:51.305324   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:51.330682   45025 cri.go:89] found id: ""
	I1211 00:21:51.330695   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.330712   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:51.330717   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:51.330786   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:51.361324   45025 cri.go:89] found id: ""
	I1211 00:21:51.361338   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.361345   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:51.361351   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:51.361410   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:51.387177   45025 cri.go:89] found id: ""
	I1211 00:21:51.387191   45025 logs.go:282] 0 containers: []
	W1211 00:21:51.387198   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:51.387205   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:51.387216   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:51.461910   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:51.461927   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:51.473746   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:51.473761   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:51.542962   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:51.534067   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.534641   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.536484   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.537316   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:51.539159   17021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:51.542994   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:51.543008   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:51.611981   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:51.612003   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:54.140885   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:54.151154   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:54.151216   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:54.177398   45025 cri.go:89] found id: ""
	I1211 00:21:54.177412   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.177419   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:54.177424   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:54.177483   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:54.202665   45025 cri.go:89] found id: ""
	I1211 00:21:54.202679   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.202686   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:54.202691   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:54.202751   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:54.228121   45025 cri.go:89] found id: ""
	I1211 00:21:54.228135   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.228142   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:54.228147   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:54.228206   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:54.254699   45025 cri.go:89] found id: ""
	I1211 00:21:54.254713   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.254726   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:54.254732   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:54.254794   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:54.280912   45025 cri.go:89] found id: ""
	I1211 00:21:54.280926   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.280934   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:54.280939   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:54.281000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:54.309917   45025 cri.go:89] found id: ""
	I1211 00:21:54.309930   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.309937   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:54.309943   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:54.310000   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:54.335081   45025 cri.go:89] found id: ""
	I1211 00:21:54.335094   45025 logs.go:282] 0 containers: []
	W1211 00:21:54.335102   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:54.335110   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:54.335120   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:21:54.402799   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:54.402819   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:54.423966   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:54.423982   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:54.493676   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:54.484501   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.485237   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487135   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.487726   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:54.489654   17129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:54.493685   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:54.493695   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:54.562184   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:54.562202   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.095145   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:21:57.105735   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:21:57.105793   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:21:57.137586   45025 cri.go:89] found id: ""
	I1211 00:21:57.137600   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.137607   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:21:57.137612   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:21:57.137669   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:21:57.162960   45025 cri.go:89] found id: ""
	I1211 00:21:57.162997   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.163004   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:21:57.163009   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:21:57.163068   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:21:57.189960   45025 cri.go:89] found id: ""
	I1211 00:21:57.189982   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.189989   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:21:57.189994   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:21:57.190059   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:21:57.215046   45025 cri.go:89] found id: ""
	I1211 00:21:57.215059   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.215067   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:21:57.215072   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:21:57.215129   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:21:57.239646   45025 cri.go:89] found id: ""
	I1211 00:21:57.239659   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.239678   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:21:57.239682   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:21:57.239737   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:21:57.264818   45025 cri.go:89] found id: ""
	I1211 00:21:57.264832   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.264839   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:21:57.264844   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:21:57.264913   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:21:57.290063   45025 cri.go:89] found id: ""
	I1211 00:21:57.290076   45025 logs.go:282] 0 containers: []
	W1211 00:21:57.290083   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:21:57.290090   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:21:57.290103   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:21:57.300820   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:21:57.300834   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:21:57.366226   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:21:57.357983   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.358683   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360197   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.360722   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:21:57.362161   17228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:21:57.366236   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:21:57.366246   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:21:57.435439   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:21:57.435458   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:21:57.464292   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:21:57.464311   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.034825   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:00.107263   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:22:00.107592   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:22:00.209037   45025 cri.go:89] found id: ""
	I1211 00:22:00.209052   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.209060   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:22:00.209065   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:22:00.209139   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:22:00.259397   45025 cri.go:89] found id: ""
	I1211 00:22:00.259413   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.259420   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:22:00.259426   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:22:00.259499   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:22:00.300996   45025 cri.go:89] found id: ""
	I1211 00:22:00.301011   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.301020   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:22:00.301026   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:22:00.301121   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:22:00.355749   45025 cri.go:89] found id: ""
	I1211 00:22:00.355766   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.355775   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:22:00.355782   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:22:00.355863   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:22:00.397265   45025 cri.go:89] found id: ""
	I1211 00:22:00.397279   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.397287   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:22:00.397292   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:22:00.397357   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:22:00.431985   45025 cri.go:89] found id: ""
	I1211 00:22:00.432000   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.432008   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:22:00.432014   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:22:00.432079   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:22:00.475122   45025 cri.go:89] found id: ""
	I1211 00:22:00.475138   45025 logs.go:282] 0 containers: []
	W1211 00:22:00.475145   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:22:00.475154   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:22:00.475165   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 00:22:00.544019   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:22:00.544039   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:22:00.556109   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:22:00.556126   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:22:00.625124   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:22:00.616622   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.617477   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619049   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.619593   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:22:00.621252   17341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:22:00.625135   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:22:00.625146   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:22:00.693368   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:22:00.693387   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:22:03.226119   45025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:22:03.236558   45025 kubeadm.go:602] duration metric: took 4m3.502420888s to restartPrimaryControlPlane
	W1211 00:22:03.236621   45025 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 00:22:03.236698   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
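
The duration metric two lines up shows the probe loop ran for just over four minutes before minikube gave up on reviving the existing control plane; the fallback is the forced kubeadm reset in the Run line above, followed by a clean kubeadm init. The manual equivalent, with the binary path and CRI socket taken verbatim from the log:

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
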
	I1211 00:22:03.653513   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:22:03.666451   45025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 00:22:03.674394   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:22:03.674497   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:22:03.682496   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:22:03.682506   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:22:03.682556   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:22:03.690253   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:22:03.690312   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:22:03.697814   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:22:03.705532   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:22:03.705584   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:22:03.712909   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.720642   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:22:03.720704   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:22:03.728085   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:22:03.735639   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:22:03.735694   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
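
The grep/rm pairs above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Here every grep exits with status 2 because the preceding kubeadm reset removed the files, so the rm calls are no-ops. A compact sketch of the same sweep; the loop form is an assumption, while the endpoint and filenames are copied from the log:

	ep='https://control-plane.minikube.internal:8441'
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done
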
	I1211 00:22:03.743458   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:22:03.864690   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:22:03.865125   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:22:03.931571   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
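
All three preflight findings are warnings rather than failures, and the init command above explicitly skips them via --ignore-preflight-errors. The actionable remedies come from the warning text itself, not from this report:

	# From the [WARNING Service-kubelet] line:
	sudo systemctl enable kubelet.service
	# From the cgroups v1 line: kubelet v1.35+ on a cgroup v1 host needs the
	# kubelet configuration option 'FailCgroupV1' set to 'false'.
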
	I1211 00:26:05.371070   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1211 00:26:05.371093   45025 kubeadm.go:319] 
	I1211 00:26:05.371179   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:26:05.375684   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.375734   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:05.375839   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:05.375903   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:05.375950   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:05.375995   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:05.376042   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:05.376088   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:05.376135   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:05.376181   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:05.376229   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:05.376273   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:05.376319   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:05.376364   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:05.376435   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:05.376530   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:05.376618   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:05.376680   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:05.379737   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:05.379839   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:05.379918   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:05.380012   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:05.380083   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:05.380156   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:05.380207   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:05.380283   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:05.380352   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:05.380433   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:05.380508   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:05.380558   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:05.380610   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:05.380656   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:05.380709   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:05.380759   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:05.380821   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:05.380871   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:05.380957   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:05.381029   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:05.383945   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:05.384057   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:05.384159   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:05.384228   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:05.384331   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:05.384422   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:05.384548   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:05.384657   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:05.384704   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:05.384857   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:05.384973   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:26:05.385047   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182146s
	I1211 00:26:05.385051   45025 kubeadm.go:319] 
	I1211 00:26:05.385122   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:26:05.385153   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:26:05.385275   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:26:05.385279   45025 kubeadm.go:319] 
	I1211 00:26:05.385390   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:26:05.385422   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:26:05.385452   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:26:05.385461   45025 kubeadm.go:319] 
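
The init therefore died in the wait-control-plane phase: kubeadm polled the kubelet's local healthz endpoint for the full 4m0s budget without an answer. The probe and the two triage commands below are quoted from the kubeadm output itself:

	curl -sSL http://127.0.0.1:10248/healthz   # the check kubeadm waited on
	systemctl status kubelet
	journalctl -xeu kubelet
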
	W1211 00:26:05.385565   45025 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182146s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1211 00:26:05.385656   45025 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
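
minikube treats the failed init as retryable: the Run line above issues another forced kubeadm reset, after which the stale-config sweep and init repeat, which is why the second attempt below reads almost identically to the first. To follow such a retry live, the same kubelet journal the log gatherer collects can be streamed; the -f flag is an addition here, the rest of the command appears throughout the log:

	sudo journalctl -u kubelet -f
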
	I1211 00:26:05.805014   45025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:26:05.817222   45025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 00:26:05.817275   45025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 00:26:05.825148   45025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 00:26:05.825157   45025 kubeadm.go:158] found existing configuration files:
	
	I1211 00:26:05.825207   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1211 00:26:05.832932   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 00:26:05.832991   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 00:26:05.840249   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1211 00:26:05.848087   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 00:26:05.848149   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 00:26:05.855944   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.863906   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 00:26:05.863960   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 00:26:05.871464   45025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1211 00:26:05.879062   45025 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 00:26:05.879116   45025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 00:26:05.886444   45025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 00:26:05.923722   45025 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 00:26:05.924046   45025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 00:26:06.002092   45025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 00:26:06.002152   45025 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 00:26:06.002191   45025 kubeadm.go:319] OS: Linux
	I1211 00:26:06.002233   45025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 00:26:06.002283   45025 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 00:26:06.002332   45025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 00:26:06.002377   45025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 00:26:06.002429   45025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 00:26:06.002486   45025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 00:26:06.002528   45025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 00:26:06.002578   45025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 00:26:06.002626   45025 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 00:26:06.076323   45025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 00:26:06.076462   45025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 00:26:06.076570   45025 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 00:26:06.087446   45025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 00:26:06.092847   45025 out.go:252]   - Generating certificates and keys ...
	I1211 00:26:06.092964   45025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 00:26:06.093051   45025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 00:26:06.093134   45025 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 00:26:06.093195   45025 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 00:26:06.093273   45025 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 00:26:06.093327   45025 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 00:26:06.093390   45025 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 00:26:06.093452   45025 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 00:26:06.093529   45025 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 00:26:06.093602   45025 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 00:26:06.093639   45025 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 00:26:06.093696   45025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 00:26:06.504239   45025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 00:26:06.701840   45025 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 00:26:07.114481   45025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 00:26:07.226723   45025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 00:26:07.349377   45025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 00:26:07.350330   45025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 00:26:07.353007   45025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 00:26:07.356354   45025 out.go:252]   - Booting up control plane ...
	I1211 00:26:07.356511   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 00:26:07.356601   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 00:26:07.356672   45025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 00:26:07.373379   45025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 00:26:07.373693   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 00:26:07.381535   45025 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 00:26:07.381916   45025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 00:26:07.382096   45025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 00:26:07.509380   45025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 00:26:07.509514   45025 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 00:30:07.509220   45025 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00004985s
	I1211 00:30:07.509346   45025 kubeadm.go:319] 
	I1211 00:30:07.509429   45025 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 00:30:07.509464   45025 kubeadm.go:319] 	- The kubelet is not running
	I1211 00:30:07.509569   45025 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 00:30:07.509574   45025 kubeadm.go:319] 
	I1211 00:30:07.509677   45025 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 00:30:07.509708   45025 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 00:30:07.509737   45025 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 00:30:07.509740   45025 kubeadm.go:319] 
	I1211 00:30:07.513952   45025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 00:30:07.514370   45025 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 00:30:07.514477   45025 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 00:30:07.514741   45025 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 00:30:07.514745   45025 kubeadm.go:319] 
	I1211 00:30:07.514828   45025 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 00:30:07.514885   45025 kubeadm.go:403] duration metric: took 12m7.817411267s to StartCluster
	I1211 00:30:07.514914   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 00:30:07.514994   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 00:30:07.541269   45025 cri.go:89] found id: ""
	I1211 00:30:07.541283   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.541291   45025 logs.go:284] No container was found matching "kube-apiserver"
	I1211 00:30:07.541299   45025 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 00:30:07.541373   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 00:30:07.568371   45025 cri.go:89] found id: ""
	I1211 00:30:07.568385   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.568392   45025 logs.go:284] No container was found matching "etcd"
	I1211 00:30:07.568397   45025 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 00:30:07.568452   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 00:30:07.593463   45025 cri.go:89] found id: ""
	I1211 00:30:07.593477   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.593484   45025 logs.go:284] No container was found matching "coredns"
	I1211 00:30:07.593489   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 00:30:07.593551   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 00:30:07.617718   45025 cri.go:89] found id: ""
	I1211 00:30:07.617732   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.617739   45025 logs.go:284] No container was found matching "kube-scheduler"
	I1211 00:30:07.617746   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 00:30:07.617801   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 00:30:07.644176   45025 cri.go:89] found id: ""
	I1211 00:30:07.644190   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.644197   45025 logs.go:284] No container was found matching "kube-proxy"
	I1211 00:30:07.644202   45025 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 00:30:07.644260   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 00:30:07.673956   45025 cri.go:89] found id: ""
	I1211 00:30:07.673970   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.673977   45025 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 00:30:07.673982   45025 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 00:30:07.674040   45025 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 00:30:07.699591   45025 cri.go:89] found id: ""
	I1211 00:30:07.699605   45025 logs.go:282] 0 containers: []
	W1211 00:30:07.699612   45025 logs.go:284] No container was found matching "kindnet"
	I1211 00:30:07.699619   45025 logs.go:123] Gathering logs for dmesg ...
	I1211 00:30:07.699631   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 00:30:07.710731   45025 logs.go:123] Gathering logs for describe nodes ...
	I1211 00:30:07.710746   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 00:30:07.782904   45025 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1211 00:30:07.773914   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.774740   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.776540   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.777187   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:30:07.779006   21130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 00:30:07.782915   45025 logs.go:123] Gathering logs for CRI-O ...
	I1211 00:30:07.782925   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 00:30:07.853292   45025 logs.go:123] Gathering logs for container status ...
	I1211 00:30:07.853310   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 00:30:07.882071   45025 logs.go:123] Gathering logs for kubelet ...
	I1211 00:30:07.882089   45025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1211 00:30:07.951740   45025 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 00:30:07.951780   45025 out.go:285] * 
	W1211 00:30:07.951888   45025 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:30:07.951950   45025 out.go:285] * 
	W1211 00:30:07.954090   45025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 00:30:07.959721   45025 out.go:203] 
	W1211 00:30:07.962947   45025 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00004985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 00:30:07.963287   45025 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 00:30:07.963357   45025 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 00:30:07.966374   45025 out.go:203] 
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:31:57.852836   22593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:31:57.853344   22593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:31:57.854897   22593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:31:57.855256   22593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:31:57.856679   22593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:31:57 up 43 min,  0 user,  load average: 0.40, 0.35, 0.42
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:31:55 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:31:55 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1105.
	Dec 11 00:31:55 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:55 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:55 functional-786978 kubelet[22480]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:55 functional-786978 kubelet[22480]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:55 functional-786978 kubelet[22480]: E1211 00:31:55.955874   22480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:31:55 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:31:55 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:31:56 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1106.
	Dec 11 00:31:56 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:56 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:56 functional-786978 kubelet[22486]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:56 functional-786978 kubelet[22486]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:56 functional-786978 kubelet[22486]: E1211 00:31:56.724086   22486 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:31:56 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:31:56 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:31:57 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1107.
	Dec 11 00:31:57 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:57 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:31:57 functional-786978 kubelet[22507]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:57 functional-786978 kubelet[22507]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:31:57 functional-786978 kubelet[22507]: E1211 00:31:57.471161   22507 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:31:57 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:31:57 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (382.597723ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.41s)
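
The kubelet journal captured above shows the common root cause behind this failure group: kubelet v1.35.0-beta.0 exits at startup with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it endlessly (the counter reaches 1107), kubeadm's wait-control-plane phase gives up after 4m0s, and the apiserver on port 8441 never comes up, so every subsequent kubectl call and pod poll is refused. A minimal triage sketch follows, combining a cgroup-version check with the commands the log itself quotes; the durable fix hinted at by the kubeadm preflight warning (a KubeletConfiguration setting failCgroupV1: false, or moving the host to cgroup v2) is an assumption to verify against the v1.35 kubelet, not something this run exercised:

	# Confirm the host is on cgroup v1: prints "tmpfs" on v1, "cgroup2fs" on v2.
	stat -fc %T /sys/fs/cgroup/
	# Inspect the restart loop kubeadm points at (both commands are quoted in the log above).
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 40
	# Retry with the override minikube's own error message suggests.
	minikube start -p functional-786978 --extra-config=kubelet.cgroup-driver=systemd

Note that the cgroup-driver override alone may not clear the validation error on this 5.15.0-1084-aws cgroup v1 kernel: the warning text requires 'FailCgroupV1' to be set to 'false' explicitly, which minikube would have to pass through to the kubelet configuration.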

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1211 00:30:26.070878    4875 retry.go:31] will retry after 1.52814952s: Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1211 00:30:37.599759    4875 retry.go:31] will retry after 4.497808969s: Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 13 times]
I1211 00:30:52.099808    4875 retry.go:31] will retry after 5.46052708s: Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 15 times]
I1211 00:31:07.561747    4875 retry.go:31] will retry after 6.841714369s: Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 15 times]
I1211 00:31:24.404151    4875 retry.go:31] will retry after 21.720875515s: Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 108 times]
E1211 00:33:12.725449    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 62 times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (309.465833ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
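The run of warnings above comes from a poll loop: list pods in "kube-system" matching the label selector, log each failed attempt, retry until the 4m0s budget is spent. A minimal sketch of such a loop with client-go, assuming a kubeconfig at the default path; the namespace, label selector, and timeout come from the log, while the 2-second retry interval and all names are illustrative, not the test's actual helper:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same budget as the test: give up after 4 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err != nil {
			// With the apiserver down, each attempt fails fast with
			// "connection refused" and the loop just logs and retries.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Printf("found %d pod(s)\n", len(pods.Items))
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("failed to start within 4m0s:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}

The last warning in the run differs from the rest because, once the deadline passes, client-go's rate limiter rejects the request before it is even sent; that is the "client rate limiter Wait returned an error: context deadline exceeded" line.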
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
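Two details in the inspect output frame the failure: the container is still "running" with node IP 192.168.49.2 on the "functional-786978" network, and the apiserver port 8441/tcp is published on the host at 127.0.0.1:32786. So the refused connections implicate the apiserver process inside the container rather than Docker networking. A minimal sketch for probing both endpoints from the host, with the addresses copied from the inspect output above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Node IP inside the Docker network, then the host-published port.
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:32786"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Expect "connection refused" on both while the apiserver is down.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: open\n", addr)
	}
}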
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (340.734227ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
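The "(may be ok)" note reflects how the harness treats minikube status: a non-zero exit code reports component state (here the Host is "Running" even though the apiserver is stopped), not a broken command, so the post-mortem continues. A reduced sketch of running the same check and reading the exit code; it assumes a minikube binary on PATH rather than the out/minikube-linux-arm64 build used by the harness:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "functional-786978")
	out, err := cmd.Output()
	fmt.Printf("stdout: %s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero status encodes cluster state; log it and keep going,
		// as the harness does above, instead of aborting the post-mortem.
		fmt.Println("status error: exit status", exitErr.ExitCode(), "(may be ok)")
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}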
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-786978 image load --daemon kicbase/echo-server:functional-786978 --alsologtostderr                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls                                                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image save kicbase/echo-server:functional-786978 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image rm kicbase/echo-server:functional-786978 --alsologtostderr                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls                                                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls                                                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image save --daemon kicbase/echo-server:functional-786978 --alsologtostderr                                                             │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /etc/test/nested/copy/4875/hosts                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /etc/ssl/certs/4875.pem                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /usr/share/ca-certificates/4875.pem                                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /etc/ssl/certs/48752.pem                                                                                                   │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /usr/share/ca-certificates/48752.pem                                                                                       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls --format short --alsologtostderr                                                                                               │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ update-context │ functional-786978 update-context --alsologtostderr -v=2                                                                                                   │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh            │ functional-786978 ssh pgrep buildkitd                                                                                                                     │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ image          │ functional-786978 image build -t localhost/my-image:functional-786978 testdata/build --alsologtostderr                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls                                                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls --format yaml --alsologtostderr                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls --format json --alsologtostderr                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ image          │ functional-786978 image ls --format table --alsologtostderr                                                                                               │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ update-context │ functional-786978 update-context --alsologtostderr -v=2                                                                                                   │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ update-context │ functional-786978 update-context --alsologtostderr -v=2                                                                                                   │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:32:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:32:13.818690   62323 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:32:13.818874   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.818903   62323 out.go:374] Setting ErrFile to fd 2...
	I1211 00:32:13.818924   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.819236   62323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:32:13.819622   62323 out.go:368] Setting JSON to false
	I1211 00:32:13.820496   62323 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2620,"bootTime":1765410514,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:32:13.820586   62323 start.go:143] virtualization:  
	I1211 00:32:13.823923   62323 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:32:13.826856   62323 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:32:13.826920   62323 notify.go:221] Checking for updates...
	I1211 00:32:13.832561   62323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:32:13.835419   62323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:32:13.838258   62323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:32:13.841065   62323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:32:13.843962   62323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:32:13.847347   62323 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:32:13.847919   62323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:32:13.879826   62323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:32:13.879943   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:13.937210   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.928080673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:13.937315   62323 docker.go:319] overlay module found
	I1211 00:32:13.940499   62323 out.go:179] * Using the docker driver based on existing profile
	I1211 00:32:13.943472   62323 start.go:309] selected driver: docker
	I1211 00:32:13.943512   62323 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:13.943621   62323 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:32:13.943727   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:14.007488   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.999061697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:14.007934   62323 cni.go:84] Creating CNI manager for ""
	I1211 00:32:14.008001   62323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:32:14.008045   62323 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:14.012782   62323 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.907161732Z" level=info msg="Checking image status: kicbase/echo-server:functional-786978" id=148b8f3c-8c8e-4a8b-9ac2-f7c18c685919 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.907337882Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.907379064Z" level=info msg="Image kicbase/echo-server:functional-786978 not found" id=148b8f3c-8c8e-4a8b-9ac2-f7c18c685919 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.907439135Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-786978 found" id=148b8f3c-8c8e-4a8b-9ac2-f7c18c685919 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.933204942Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-786978" id=13e48f8d-8e9f-4505-a7ce-edc46b64b8e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.933344259Z" level=info msg="Image docker.io/kicbase/echo-server:functional-786978 not found" id=13e48f8d-8e9f-4505-a7ce-edc46b64b8e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.933383447Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-786978 found" id=13e48f8d-8e9f-4505-a7ce-edc46b64b8e4 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.961590173Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-786978" id=b9a70733-7093-420f-b864-b7dac47a4208 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.961741551Z" level=info msg="Image localhost/kicbase/echo-server:functional-786978 not found" id=b9a70733-7093-420f-b864-b7dac47a4208 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:18 functional-786978 crio[9951]: time="2025-12-11T00:32:18.961780953Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-786978 found" id=b9a70733-7093-420f-b864-b7dac47a4208 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.012522708Z" level=info msg="Checking image status: kicbase/echo-server:functional-786978" id=59e4056c-7d7d-4e9c-9c68-6f9c85d5001c name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.012717312Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.01276144Z" level=info msg="Image kicbase/echo-server:functional-786978 not found" id=59e4056c-7d7d-4e9c-9c68-6f9c85d5001c name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.012842327Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-786978 found" id=59e4056c-7d7d-4e9c-9c68-6f9c85d5001c name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.042348683Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-786978" id=88324951-7643-45d5-93d4-cfe1112f4947 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.042523963Z" level=info msg="Image docker.io/kicbase/echo-server:functional-786978 not found" id=88324951-7643-45d5-93d4-cfe1112f4947 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.042582999Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-786978 found" id=88324951-7643-45d5-93d4-cfe1112f4947 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:32:22 functional-786978 crio[9951]: time="2025-12-11T00:32:22.06904019Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-786978" id=cea48a9c-776a-4980-9b5c-9c07b3af7cac name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:34:17.818003   25401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:34:17.818520   25401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:34:17.820213   25401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:34:17.820840   25401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:34:17.821852   25401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:34:17 up 45 min,  0 user,  load average: 0.24, 0.39, 0.43
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:34:14 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:34:15 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1291.
	Dec 11 00:34:15 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:15 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:15 functional-786978 kubelet[25277]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:15 functional-786978 kubelet[25277]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:15 functional-786978 kubelet[25277]: E1211 00:34:15.699414   25277 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:34:15 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:34:15 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:34:16 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 11 00:34:16 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:16 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:16 functional-786978 kubelet[25282]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:16 functional-786978 kubelet[25282]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:16 functional-786978 kubelet[25282]: E1211 00:34:16.447738   25282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:34:16 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:34:16 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:34:17 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 11 00:34:17 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:17 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:34:17 functional-786978 kubelet[25315]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:17 functional-786978 kubelet[25315]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:34:17 functional-786978 kubelet[25315]: E1211 00:34:17.189307   25315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:34:17 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:34:17 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
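The kubelet log above contains the actual root cause for this failure chain: kubelet refuses to start because the host is still on cgroup v1, so the apiserver never comes back up and every subsequent kubectl call is refused. A minimal Go sketch of the by-hand triage step (this is an assumption about how one would verify the host, not part of the test harness; it checks for the cgroup v2 marker file):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only on a cgroup v2
		// (unified) host; its absence matches the kubelet validation error
		// "kubelet is configured to not run on a host using cgroup v1".
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified): kubelet v1.35+ can start")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1: matches the kubelet failure above")
		} else {
			fmt.Println("could not determine cgroup version:", err)
		}
	}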
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (339.38322ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)
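Every failure in this parallel block shares one symptom: nothing is listening on apiserver port 8441. A hedged probe sketch in Go (the address comes from the connection-refused errors in this report; this is a manual diagnostic, not part of the suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused dial means the control-plane process is down; a DNS or
		// routing problem would surface as a timeout or resolution error.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}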

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-786978 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-786978 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (62.967447ms)

** stderr ** 
	E1211 00:32:15.821506   62753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:32:15.823115   62753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:32:15.824562   62753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:32:15.826033   62753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:32:15.827458   62753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-786978 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : [stderr identical to the connection-refused block above]
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : [stderr identical to the connection-refused block above]
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : [stderr identical to the connection-refused block above]
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : [stderr identical to the connection-refused block above]
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : [stderr identical to the connection-refused block above]
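The assertions above drive a go-template over the first node's label map. The template logic can be exercised in isolation against stand-in data; a sketch (struct fields are exported here because Go's text/template requires it, whereas kubectl evaluates the lowercase JSON form, and the sample labels are illustrative only):

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for one item of `kubectl get nodes` output.
	type node struct {
		Metadata struct {
			Labels map[string]string
		}
	}

	func main() {
		var n node
		n.Metadata.Labels = map[string]string{
			"minikube.k8s.io/name":    "functional-786978",
			"minikube.k8s.io/primary": "true",
		}
		// Same shape as the failing command's template: range over the
		// label map and print each key followed by a space.
		t := template.Must(template.New("labels").Parse(
			"{{range $k, $v := .Metadata.Labels}}{{$k}} {{end}}"))
		_ = t.Execute(os.Stdout, n)
	}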
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-786978
helpers_test.go:244: (dbg) docker inspect functional-786978:

-- stdout --
	[
	    {
	        "Id": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	        "Created": "2025-12-11T00:03:15.146383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T00:03:15.209186613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/hosts",
	        "LogPath": "/var/lib/docker/containers/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634/a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634-json.log",
	        "Name": "/functional-786978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-786978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-786978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4edbfef17d046c803948a391aa5c034cbe71e64e7412d0bc90fee711d1f3634",
	                "LowerDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2e6e6d7199108ae73f6fb21e5600d4bf19805f2e168a9e3fcfd48c4645b35cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-786978",
	                "Source": "/var/lib/docker/volumes/functional-786978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-786978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-786978",
	                "name.minikube.sigs.k8s.io": "functional-786978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c21e9504fdd35128eb7c9d9678bcaec4c606f4dbb375eccc7850f05cbdd09c",
	            "SandboxKey": "/var/run/docker/netns/58c21e9504fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-786978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:ba:0c:95:93:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92f6141e20d1c3180afb0135982164bf439cc1ecb135ca62d30199e68fba6e91",
	                    "EndpointID": "1fa1b58e5f8b2a6dea2ad5795771064d0fd4bb1015361b46240694ee71c4601b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-786978",
	                        "a4edbfef17d0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
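The post-mortem dumps the full docker inspect document, but only two fields matter for this failure: the container state and the host port mapped to the apiserver. A sketch that shells out to the Docker CLI for just those (Go stdlib; the --format template is standard docker syntax and the profile name is taken from this report):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Extract {{.State.Status}} plus the first host binding for 8441/tcp
		// instead of dumping the whole inspect blob.
		out, err := exec.Command("docker", "inspect", "--format",
			`{{.State.Status}} 8441/tcp -> {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-786978").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed: %v: %s\n", err, out)
			return
		}
		fmt.Printf("%s\n", out)
	}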
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-786978 -n functional-786978: exit status 2 (305.242592ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-786978 service hello-node --url                                                                                                          │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1              │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh cat /mount-9p/test-1765413124133681473                                                                                        │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh -- ls -la /mount-9p                                                                                                           │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh sudo umount -f /mount-9p                                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount1 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount2 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount3 --alsologtostderr -v=1                │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-786978 ssh findmnt -T /mount1                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount2                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ ssh       │ functional-786978 ssh findmnt -T /mount3                                                                                                            │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │ 11 Dec 25 00:32 UTC │
	│ mount     │ -p functional-786978 --kill=true                                                                                                                    │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-786978 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-786978 --alsologtostderr -v=1                                                                                      │ functional-786978 │ jenkins │ v1.37.0 │ 11 Dec 25 00:32 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 00:32:13
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 00:32:13.818690   62323 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:32:13.818874   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.818903   62323 out.go:374] Setting ErrFile to fd 2...
	I1211 00:32:13.818924   62323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.819236   62323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:32:13.819622   62323 out.go:368] Setting JSON to false
	I1211 00:32:13.820496   62323 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2620,"bootTime":1765410514,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:32:13.820586   62323 start.go:143] virtualization:  
	I1211 00:32:13.823923   62323 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:32:13.826856   62323 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:32:13.826920   62323 notify.go:221] Checking for updates...
	I1211 00:32:13.832561   62323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:32:13.835419   62323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:32:13.838258   62323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:32:13.841065   62323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:32:13.843962   62323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:32:13.847347   62323 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:32:13.847919   62323 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:32:13.879826   62323 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:32:13.879943   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:13.937210   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.928080673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:13.937315   62323 docker.go:319] overlay module found
	I1211 00:32:13.940499   62323 out.go:179] * Using the docker driver based on existing profile
	I1211 00:32:13.943472   62323 start.go:309] selected driver: docker
	I1211 00:32:13.943512   62323 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:13.943621   62323 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:32:13.943727   62323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:14.007488   62323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.999061697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:14.007934   62323 cni.go:84] Creating CNI manager for ""
	I1211 00:32:14.008001   62323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 00:32:14.008045   62323 start.go:353] cluster config:
	{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:14.012782   62323 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559941409Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.559978333Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560024989Z" level=info msg="Create NRI interface"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560126324Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560135908Z" level=info msg="runtime interface created"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560147707Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560154386Z" level=info msg="runtime interface starting up..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560161025Z" level=info msg="starting plugins..."
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560173825Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 00:17:58 functional-786978 crio[9951]: time="2025-12-11T00:17:58.560247985Z" level=info msg="No systemd watchdog enabled"
	Dec 11 00:17:58 functional-786978 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.935283532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6de6e87e-5991-43bc-b331-3c4da3939cd8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936110736Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ed5fc17-8833-4a00-b49a-175298f161c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.936663858Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=5386bae8-3763-43a3-8e84-b7f98f5b64ad name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937146602Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=bb2a1f8a-e043-498b-9aaf-3f590536bef8 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.937597116Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=46623a2c-7e86-46f1-9f50-faf880a0f7a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938029611Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aacc6a08-88ba-4e77-9e82-199f5f521e79 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:22:03 functional-786978 crio[9951]: time="2025-12-11T00:22:03.938428834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1070a90a-4ba9-466d-bf22-501c564282df name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.079523143Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1a3e30ef-9eb4-44b7-80b3-789735758754 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080212934Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a1702c38-afbc-48d3-aaa7-dbad7d98554e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.080781133Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4a323cb1-ab88-481e-9cee-f539f47c462d name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.081259674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11defec5-3e05-48c6-9020-9fbe1396c100 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08179049Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=befd3141-5ed6-4610-bc01-9a813a131605 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.08229226Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=882be104-d73f-4553-a30a-8e88aacff392 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 00:26:06 functional-786978 crio[9951]: time="2025-12-11T00:26:06.082743281Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=c95dd536-8fa6-4a4b-9d2c-8647b294d5c0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1211 00:32:16.749409   23596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:16.750209   23596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:16.751844   23596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:16.752153   23596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1211 00:32:16.753659   23596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 23:48] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014745] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.691199] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034171] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753043] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431836] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 23:53] overlayfs: idmapped layers are currently not supported
	[  +0.083383] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 23:58] overlayfs: idmapped layers are currently not supported
	[Dec10 23:59] overlayfs: idmapped layers are currently not supported
	[Dec11 00:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 00:32:16 up 43 min,  0 user,  load average: 0.78, 0.44, 0.45
	Linux functional-786978 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 11 00:32:14 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:14 functional-786978 kubelet[23385]: E1211 00:32:14.956953   23385 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:14 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:15 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 11 00:32:15 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:15 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:15 functional-786978 kubelet[23491]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:15 functional-786978 kubelet[23491]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:15 functional-786978 kubelet[23491]: E1211 00:32:15.735432   23491 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:15 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:15 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 00:32:16 functional-786978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 11 00:32:16 functional-786978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:16 functional-786978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 00:32:16 functional-786978 kubelet[23521]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:16 functional-786978 kubelet[23521]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 00:32:16 functional-786978 kubelet[23521]: E1211 00:32:16.464472   23521 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 00:32:16 functional-786978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 00:32:16 functional-786978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-786978 -n functional-786978: exit status 2 (339.470925ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-786978" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.41s)
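Note on the failure above: the kubelet journal shows the node agent crash-looping because this kubelet build refuses to start on a cgroup v1 host, which keeps the apiserver stopped for every test in this group. A minimal illustrative sketch (not minikube code; it only reads the standard kernel interface) of how the host's cgroup version can be detected:

    // cgroup v2 mounts a unified hierarchy that exposes cgroup.controllers
    // at the root; cgroup v1 does not. This is the host property the
    // "kubelet is configured to not run on a host using cgroup v1" error
    // above is rejecting.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else if os.IsNotExist(err) {
    		fmt.Println("cgroup v1 - the state this kubelet build refuses at startup")
    	} else {
    		fmt.Println("could not determine cgroup version:", err)
    	}
    }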
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1211 00:30:15.541670   58063 out.go:360] Setting OutFile to fd 1 ...
I1211 00:30:15.542010   58063 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:30:15.542019   58063 out.go:374] Setting ErrFile to fd 2...
I1211 00:30:15.542024   58063 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:30:15.542291   58063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:30:15.542547   58063 mustload.go:66] Loading cluster: functional-786978
I1211 00:30:15.543430   58063 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:30:15.543943   58063 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:30:15.564798   58063 host.go:66] Checking if "functional-786978" exists ...
I1211 00:30:15.565118   58063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1211 00:30:15.682661   58063 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:30:15.673022624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1211 00:30:15.682796   58063 api_server.go:166] Checking apiserver status ...
I1211 00:30:15.682855   58063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1211 00:30:15.682897   58063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:30:15.714667   58063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
W1211 00:30:15.837720   58063 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1211 00:30:15.841068   58063 out.go:179] * The control-plane node functional-786978 apiserver is not running: (state=Stopped)
I1211 00:30:15.844106   58063 out.go:179]   To start a cluster, run: "minikube start -p functional-786978"
stdout: * The control-plane node functional-786978 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-786978"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 58064: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
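Note: exit code 103 is produced before any tunnel is created. The --alsologtostderr trace above shows the sequence: load the profile, inspect the container, probe for a kube-apiserver process over SSH with pgrep, and bail out with the state=Stopped advice when the probe finds nothing. A rough standalone approximation of that probe (run directly on the node rather than over SSH as minikube does; the exit code is mimicked here only for illustration):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// pgrep exits 0 when a matching process exists and 1 when none does;
    	// the same check appears in the api_server.go lines of the trace above.
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	if err != nil {
    		fmt.Println("The control-plane node apiserver is not running: (state=Stopped)")
    		os.Exit(103)
    	}
    	fmt.Println("apiserver process found")
    }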
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-786978 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-786978 apply -f testdata/testsvc.yaml: exit status 1 (108.887924ms)
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-786978 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)
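Note: the apply fails before reaching the cluster because kubectl's client-side validation first downloads the cluster's OpenAPI schema, and that request is refused while the apiserver is down. A minimal sketch reproducing just the failing request (endpoint copied from the error above; certificate verification is skipped only because this is a throwaway probe against a test cluster):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Throwaway probe only; do not skip verification in real clients.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	// The same endpoint kubectl fetches before validating a manifest.
    	resp, err := client.Get("https://192.168.49.2:8441/openapi/v2?timeout=32s")
    	if err != nil {
    		fmt.Println(err) // connect: connection refused while the apiserver is down
    		return
    	}
    	resp.Body.Close()
    	fmt.Println("schema endpoint reachable:", resp.Status)
    }

As the error text itself notes, --validate=false would skip the schema download, though the subsequent create would still be refused here.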
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (100.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.108.110.71": Temporary Error: Get "http://10.108.110.71": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-786978 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-786978 get svc nginx-svc: exit status 1 (62.30358ms)
** stderr ** 
	E1211 00:31:56.179597   59151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.181166   59151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.182552   59151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.183982   59151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1211 00:31:56.185449   59151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-786978 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (100.12s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-786978 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-786978 create deployment hello-node --image kicbase/echo-server: exit status 1 (59.969496ms)
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-786978 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 service list: exit status 103 (291.017023ms)
-- stdout --
	* The control-plane node functional-786978 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-786978"
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-786978 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-786978 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-786978\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.29s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 service list -o json: exit status 103 (258.326436ms)
-- stdout --
	* The control-plane node functional-786978 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-786978"
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-786978 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 service --namespace=default --https --url hello-node: exit status 103 (258.638801ms)
-- stdout --
	* The control-plane node functional-786978 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-786978"
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-786978 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 service hello-node --url --format={{.IP}}: exit status 103 (265.820203ms)
-- stdout --
	* The control-plane node functional-786978 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-786978"
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-786978 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-786978 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-786978\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 service hello-node --url: exit status 103 (538.306921ms)
-- stdout --
	* The control-plane node functional-786978 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-786978"
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-786978 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-786978 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-786978"
functional_test.go:1579: failed to parse "* The control-plane node functional-786978 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-786978\"": parse "* The control-plane node functional-786978 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-786978\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)
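Note: the final parse error is a property of net/url rather than of the service: the test feeds minikube's multi-line advice text straight into url.Parse, and the embedded newline is an ASCII control character, which the parser rejects with exactly the message quoted above. A short reproduction (the URL string is illustrative):

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	// Any ASCII control character (here the embedded '\n') is rejected.
    	_, err := url.Parse("* The control-plane node apiserver is not running\n  To start a cluster ...")
    	fmt.Println(err) // net/url: invalid control character in URL
    }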
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.61s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765413124133681473" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765413124133681473" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765413124133681473" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001/test-1765413124133681473
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.427647ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1211 00:32:04.477385    4875 retry.go:31] will retry after 716.252991ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 11 00:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 11 00:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 11 00:32 test-1765413124133681473
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh cat /mount-9p/test-1765413124133681473
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-786978 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-786978 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.765991ms)
** stderr ** 
	E1211 00:32:06.086318   60717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-786978 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (277.250911ms)
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37077)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 11 00:32 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 11 00:32 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 11 00:32 test-1765413124133681473
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-786978 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37077
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001 to /mount-9p
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001:/mount-9p --alsologtostderr -v=1] stderr:
I1211 00:32:04.197383   60370 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:04.197542   60370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:04.197563   60370 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:04.197573   60370 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:04.197835   60370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:04.198094   60370 mustload.go:66] Loading cluster: functional-786978
I1211 00:32:04.198503   60370 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:04.199120   60370 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:04.217294   60370 host.go:66] Checking if "functional-786978" exists ...
I1211 00:32:04.217615   60370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1211 00:32:04.311155   60370 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-11 00:32:04.302070868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1211 00:32:04.311315   60370 cli_runner.go:164] Run: docker network inspect functional-786978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1211 00:32:04.333906   60370 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001 into VM as /mount-9p ...
I1211 00:32:04.336910   60370 out.go:179]   - Mount type:   9p
I1211 00:32:04.339957   60370 out.go:179]   - User ID:      docker
I1211 00:32:04.342849   60370 out.go:179]   - Group ID:     docker
I1211 00:32:04.345902   60370 out.go:179]   - Version:      9p2000.L
I1211 00:32:04.349041   60370 out.go:179]   - Message Size: 262144
I1211 00:32:04.352056   60370 out.go:179]   - Options:      map[]
I1211 00:32:04.354878   60370 out.go:179]   - Bind Address: 192.168.49.1:37077
I1211 00:32:04.357818   60370 out.go:179] * Userspace file server: 
I1211 00:32:04.358179   60370 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1211 00:32:04.358291   60370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:04.378730   60370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:04.493486   60370 mount.go:180] unmount for /mount-9p ran successfully
I1211 00:32:04.493516   60370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1211 00:32:04.502353   60370 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37077,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1211 00:32:04.513172   60370 main.go:127] stdlog: ufs.go:141 connected
I1211 00:32:04.513333   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tversion tag 65535 msize 262144 version '9P2000.L'
I1211 00:32:04.513373   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rversion tag 65535 msize 262144 version '9P2000'
I1211 00:32:04.513604   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1211 00:32:04.513660   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rattach tag 0 aqid (ed6cc0 ad27822 'd')
I1211 00:32:04.514471   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 0
I1211 00:32:04.514541   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cc0 ad27822 'd') m d775 at 0 mt 1765413124 l 4096 t 0 d 0 ext )
I1211 00:32:04.519259   60370 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/.mount-process: {Name:mk6b7e9e2dc31b8f95dee780d8916b14ad312a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1211 00:32:04.519470   60370 mount.go:105] mount successful: ""
I1211 00:32:04.522748   60370 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3153199012/001 to /mount-9p
I1211 00:32:04.525544   60370 out.go:203] 
I1211 00:32:04.528302   60370 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1211 00:32:05.731126   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 0
I1211 00:32:05.731204   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cc0 ad27822 'd') m d775 at 0 mt 1765413124 l 4096 t 0 d 0 ext )
I1211 00:32:05.731535   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 1 
I1211 00:32:05.731567   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 
I1211 00:32:05.731711   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Topen tag 0 fid 1 mode 0
I1211 00:32:05.731761   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Ropen tag 0 qid (ed6cc0 ad27822 'd') iounit 0
I1211 00:32:05.731905   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 0
I1211 00:32:05.731942   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cc0 ad27822 'd') m d775 at 0 mt 1765413124 l 4096 t 0 d 0 ext )
I1211 00:32:05.732099   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 0 count 262120
I1211 00:32:05.732206   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 258
I1211 00:32:05.732331   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 261862
I1211 00:32:05.732366   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:05.732495   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 262120
I1211 00:32:05.732520   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:05.732645   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1211 00:32:05.732676   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc1 ad27822 '') 
I1211 00:32:05.732813   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.732850   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cc1 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.732973   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.733002   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cc1 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.733132   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:05.733155   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:05.733314   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1211 00:32:05.733342   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc2 ad27822 '') 
I1211 00:32:05.733465   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.733504   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cc2 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.733629   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.733666   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cc2 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.733793   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:05.733816   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:05.733937   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'test-1765413124133681473' 
I1211 00:32:05.733969   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc3 ad27822 '') 
I1211 00:32:05.734096   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.734129   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.734242   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:05.734283   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:05.734408   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:05.734428   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:05.734535   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 262120
I1211 00:32:05.734563   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:05.734682   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 1
I1211 00:32:05.734708   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.011382   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 1 0:'test-1765413124133681473' 
I1211 00:32:06.011451   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc3 ad27822 '') 
I1211 00:32:06.011623   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 1
I1211 00:32:06.011683   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.011854   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 1 newfid 2 
I1211 00:32:06.011880   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 
I1211 00:32:06.012473   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Topen tag 0 fid 2 mode 0
I1211 00:32:06.012521   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Ropen tag 0 qid (ed6cc3 ad27822 '') iounit 0
I1211 00:32:06.012659   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 1
I1211 00:32:06.012696   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.012838   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 2 offset 0 count 262120
I1211 00:32:06.012883   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 24
I1211 00:32:06.013011   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 2 offset 24 count 262120
I1211 00:32:06.013049   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:06.013178   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 2 offset 24 count 262120
I1211 00:32:06.013211   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:06.013443   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:06.013489   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.013660   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 1
I1211 00:32:06.013692   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.357672   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 0
I1211 00:32:06.357743   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cc0 ad27822 'd') m d775 at 0 mt 1765413124 l 4096 t 0 d 0 ext )
I1211 00:32:06.358093   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 1 
I1211 00:32:06.358128   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 
I1211 00:32:06.358247   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Topen tag 0 fid 1 mode 0
I1211 00:32:06.358293   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Ropen tag 0 qid (ed6cc0 ad27822 'd') iounit 0
I1211 00:32:06.358419   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 0
I1211 00:32:06.358451   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6cc0 ad27822 'd') m d775 at 0 mt 1765413124 l 4096 t 0 d 0 ext )
I1211 00:32:06.358590   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 0 count 262120
I1211 00:32:06.358701   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 258
I1211 00:32:06.358838   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 261862
I1211 00:32:06.358866   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:06.359011   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 262120
I1211 00:32:06.359040   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:06.359196   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1211 00:32:06.359228   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc1 ad27822 '') 
I1211 00:32:06.359336   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.359372   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cc1 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.359501   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.359533   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6cc1 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.359659   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:06.359682   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.359817   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1211 00:32:06.359855   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc2 ad27822 '') 
I1211 00:32:06.359963   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.359995   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cc2 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.360132   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.360167   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6cc2 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.360280   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:06.360302   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.360436   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 2 0:'test-1765413124133681473' 
I1211 00:32:06.360467   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rwalk tag 0 (ed6cc3 ad27822 '') 
I1211 00:32:06.360583   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.360612   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.360743   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tstat tag 0 fid 2
I1211 00:32:06.360774   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rstat tag 0 st ('test-1765413124133681473' 'jenkins' 'jenkins' '' q (ed6cc3 ad27822 '') m 644 at 0 mt 1765413124 l 24 t 0 d 0 ext )
I1211 00:32:06.360910   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 2
I1211 00:32:06.360935   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.361057   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tread tag 0 fid 1 offset 258 count 262120
I1211 00:32:06.361080   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rread tag 0 count 0
I1211 00:32:06.361208   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 1
I1211 00:32:06.361233   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.362380   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1211 00:32:06.362454   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rerror tag 0 ename 'file not found' ecode 0
I1211 00:32:06.624139   60370 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:60154 Tclunk tag 0 fid 0
I1211 00:32:06.624192   60370 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:60154 Rclunk tag 0
I1211 00:32:06.625242   60370 main.go:127] stdlog: ufs.go:147 disconnected
I1211 00:32:06.648716   60370 out.go:179] * Unmounting /mount-9p ...
I1211 00:32:06.651794   60370 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1211 00:32:06.660088   60370 mount.go:180] unmount for /mount-9p ran successfully
I1211 00:32:06.660191   60370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/.mount-process: {Name:mk6b7e9e2dc31b8f95dee780d8916b14ad312a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1211 00:32:06.663336   60370 out.go:203] 
W1211 00:32:06.666293   60370 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1211 00:32:06.669083   60370 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.61s)
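
A note on the trace above: each ">>>" line is a 9p request (Tstat, Twalk, Topen, Tread, Tclunk) sent by the client in the minikube node, and the matching "<<<" line is the host server's reply. The only protocol-level error is the Rerror 'file not found' for the Twalk to 'pod-dates', which just means the pod had not yet created that file; the test actually fails because the harness terminated the mount process (MK_INTERRUPTED). The cleanup minikube performs at "Unmounting /mount-9p" can be reproduced by hand; a minimal sketch, assuming the /mount-9p mount point used in this run:

    # lazily force-unmount /mount-9p if it is still mounted (mirrors the log's cleanup step)
    if findmnt -T /mount-9p | grep -q /mount-9p; then
        sudo umount -f -l /mount-9p
    fi
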
TestJSONOutput/pause/Command (2.34s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-759096 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-759096 --output=json --user=testUser: exit status 80 (2.336194165s)
-- stdout --
	{"specversion":"1.0","id":"6bfb35a0-7033-4c25-af44-61421ef2dc1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-759096 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6ab4ef78-876a-4bbf-b39b-b282e405c86c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-11T00:46:53Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"6e6fb5e3-bda0-4b88-a89d-bdc4cbc4f13a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-759096 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.34s)
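
Every stdout line above is a self-contained CloudEvents JSON object, so the failure detail can be pulled straight out of the stream. A sketch, assuming jq is available; the .type and .data.message fields are taken from the output above:

    # print only the error events' messages from the JSON output stream
    out/minikube-linux-arm64 pause -p json-output-759096 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
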
TestJSONOutput/unpause/Command (2.31s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-759096 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-759096 --output=json --user=testUser: exit status 80 (2.311401908s)
-- stdout --
	{"specversion":"1.0","id":"4addb841-d039-4e02-a96f-4a4ec44e1976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-759096 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9db9adb7-8671-4dbf-928c-d342084466d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-11T00:46:55Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6f5db845-10d9-4810-8304-eb6b8cb1a284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-759096 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.31s)
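
Both the pause and unpause failures share one root cause: runc inside the node cannot open its state directory, /run/runc. With the docker driver the node is a container named after the profile, so the directory can be inspected directly; a diagnostic sketch, assuming the json-output-759096 container is still up:

    # check whether runc's state directory exists inside the minikube node container
    docker exec json-output-759096 ls -ld /run/runc
    # re-run the exact command minikube invokes under the hood
    docker exec json-output-759096 sudo runc list -f json
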
TestKubernetesUpgrade (796.71s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1211 01:05:09.648448    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:05:15.960525    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.159555395s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-174503
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-174503: (1.60898807s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-174503 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-174503 status --format={{.Host}}: exit status 7 (84.418354ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m26.422785449s)
-- stdout --
	* [kubernetes-upgrade-174503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-174503" primary control-plane node in "kubernetes-upgrade-174503" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	
-- /stdout --
** stderr ** 
	I1211 01:05:23.797784  181983 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:05:23.797974  181983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:05:23.798005  181983 out.go:374] Setting ErrFile to fd 2...
	I1211 01:05:23.798029  181983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:05:23.798448  181983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:05:23.798948  181983 out.go:368] Setting JSON to false
	I1211 01:05:23.800666  181983 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4610,"bootTime":1765410514,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 01:05:23.802703  181983 start.go:143] virtualization:  
	I1211 01:05:23.807029  181983 out.go:179] * [kubernetes-upgrade-174503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 01:05:23.810408  181983 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 01:05:23.810567  181983 notify.go:221] Checking for updates...
	I1211 01:05:23.816388  181983 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 01:05:23.819415  181983 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:05:23.822311  181983 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 01:05:23.825436  181983 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 01:05:23.828405  181983 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 01:05:23.832504  181983 config.go:182] Loaded profile config "kubernetes-upgrade-174503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1211 01:05:23.833078  181983 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 01:05:23.884883  181983 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 01:05:23.885015  181983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:05:23.973193  181983 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-11 01:05:23.9593684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:05:23.973324  181983 docker.go:319] overlay module found
	I1211 01:05:23.978832  181983 out.go:179] * Using the docker driver based on existing profile
	I1211 01:05:23.981739  181983 start.go:309] selected driver: docker
	I1211 01:05:23.981763  181983 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-174503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-174503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:05:23.981868  181983 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 01:05:23.982554  181983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:05:24.046506  181983 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-11 01:05:24.036763819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:05:24.046848  181983 cni.go:84] Creating CNI manager for ""
	I1211 01:05:24.046917  181983 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:05:24.047040  181983 start.go:353] cluster config:
	{Name:kubernetes-upgrade-174503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-174503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:05:24.050363  181983 out.go:179] * Starting "kubernetes-upgrade-174503" primary control-plane node in "kubernetes-upgrade-174503" cluster
	I1211 01:05:24.055372  181983 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 01:05:24.058299  181983 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 01:05:24.061022  181983 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 01:05:24.061048  181983 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 01:05:24.061066  181983 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1211 01:05:24.061078  181983 cache.go:65] Caching tarball of preloaded images
	I1211 01:05:24.061156  181983 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 01:05:24.061167  181983 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1211 01:05:24.061277  181983 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/config.json ...
	I1211 01:05:24.081957  181983 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 01:05:24.081981  181983 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 01:05:24.081999  181983 cache.go:243] Successfully downloaded all kic artifacts
	I1211 01:05:24.082029  181983 start.go:360] acquireMachinesLock for kubernetes-upgrade-174503: {Name:mk48bc63ac3383f765787324a1a6daea2aea7d54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 01:05:24.082104  181983 start.go:364] duration metric: took 45.209µs to acquireMachinesLock for "kubernetes-upgrade-174503"
	I1211 01:05:24.082130  181983 start.go:96] Skipping create...Using existing machine configuration
	I1211 01:05:24.082139  181983 fix.go:54] fixHost starting: 
	I1211 01:05:24.082393  181983 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-174503 --format={{.State.Status}}
	I1211 01:05:24.101758  181983 fix.go:112] recreateIfNeeded on kubernetes-upgrade-174503: state=Stopped err=<nil>
	W1211 01:05:24.101806  181983 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 01:05:24.105036  181983 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-174503" ...
	I1211 01:05:24.105152  181983 cli_runner.go:164] Run: docker start kubernetes-upgrade-174503
	I1211 01:05:24.396375  181983 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-174503 --format={{.State.Status}}
	I1211 01:05:24.433885  181983 kic.go:430] container "kubernetes-upgrade-174503" state is running.
	I1211 01:05:24.434264  181983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-174503
	I1211 01:05:24.461762  181983 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/config.json ...
	I1211 01:05:24.462107  181983 machine.go:94] provisionDockerMachine start ...
	I1211 01:05:24.462247  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:24.503103  181983 main.go:143] libmachine: Using SSH client type: native
	I1211 01:05:24.503499  181983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I1211 01:05:24.503508  181983 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 01:05:24.504282  181983 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51112->127.0.0.1:33009: read: connection reset by peer
	I1211 01:05:27.670757  181983 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-174503
	
	I1211 01:05:27.670820  181983 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-174503"
	I1211 01:05:27.670922  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:27.694595  181983 main.go:143] libmachine: Using SSH client type: native
	I1211 01:05:27.694904  181983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I1211 01:05:27.694915  181983 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-174503 && echo "kubernetes-upgrade-174503" | sudo tee /etc/hostname
	I1211 01:05:27.884362  181983 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-174503
	
	I1211 01:05:27.884451  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:27.908062  181983 main.go:143] libmachine: Using SSH client type: native
	I1211 01:05:27.908383  181983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I1211 01:05:27.908406  181983 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-174503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-174503/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-174503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 01:05:28.095587  181983 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 01:05:28.095613  181983 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 01:05:28.095649  181983 ubuntu.go:190] setting up certificates
	I1211 01:05:28.095658  181983 provision.go:84] configureAuth start
	I1211 01:05:28.095716  181983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-174503
	I1211 01:05:28.131303  181983 provision.go:143] copyHostCerts
	I1211 01:05:28.131383  181983 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 01:05:28.131398  181983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 01:05:28.131465  181983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 01:05:28.131577  181983 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 01:05:28.131588  181983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 01:05:28.131616  181983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 01:05:28.131682  181983 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 01:05:28.131692  181983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 01:05:28.133030  181983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 01:05:28.133183  181983 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-174503 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-174503 localhost minikube]
	I1211 01:05:28.469849  181983 provision.go:177] copyRemoteCerts
	I1211 01:05:28.469965  181983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 01:05:28.470049  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:28.493314  181983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/kubernetes-upgrade-174503/id_rsa Username:docker}
	I1211 01:05:28.603716  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 01:05:28.624443  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1211 01:05:28.644986  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 01:05:28.665350  181983 provision.go:87] duration metric: took 569.659229ms to configureAuth
	I1211 01:05:28.665428  181983 ubuntu.go:206] setting minikube options for container-runtime
	I1211 01:05:28.665667  181983 config.go:182] Loaded profile config "kubernetes-upgrade-174503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 01:05:28.665823  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:28.686566  181983 main.go:143] libmachine: Using SSH client type: native
	I1211 01:05:28.686869  181983 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33009 <nil> <nil>}
	I1211 01:05:28.686883  181983 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 01:05:29.070440  181983 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 01:05:29.070460  181983 machine.go:97] duration metric: took 4.608314887s to provisionDockerMachine
	I1211 01:05:29.070472  181983 start.go:293] postStartSetup for "kubernetes-upgrade-174503" (driver="docker")
	I1211 01:05:29.070484  181983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 01:05:29.070570  181983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 01:05:29.070617  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:29.087751  181983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/kubernetes-upgrade-174503/id_rsa Username:docker}
	I1211 01:05:29.191601  181983 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 01:05:29.195607  181983 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 01:05:29.195633  181983 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 01:05:29.195645  181983 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 01:05:29.195699  181983 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 01:05:29.195783  181983 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 01:05:29.195893  181983 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 01:05:29.204314  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:05:29.227383  181983 start.go:296] duration metric: took 156.896339ms for postStartSetup
	I1211 01:05:29.227544  181983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 01:05:29.227625  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:29.251673  181983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/kubernetes-upgrade-174503/id_rsa Username:docker}
	I1211 01:05:29.356717  181983 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 01:05:29.361926  181983 fix.go:56] duration metric: took 5.279781605s for fixHost
	I1211 01:05:29.361949  181983 start.go:83] releasing machines lock for "kubernetes-upgrade-174503", held for 5.27983114s
	I1211 01:05:29.362024  181983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-174503
	I1211 01:05:29.378278  181983 ssh_runner.go:195] Run: cat /version.json
	I1211 01:05:29.378326  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:29.378570  181983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 01:05:29.378623  181983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-174503
	I1211 01:05:29.411385  181983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/kubernetes-upgrade-174503/id_rsa Username:docker}
	I1211 01:05:29.419916  181983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33009 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/kubernetes-upgrade-174503/id_rsa Username:docker}
	I1211 01:05:29.543116  181983 ssh_runner.go:195] Run: systemctl --version
	I1211 01:05:29.644072  181983 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 01:05:29.692090  181983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 01:05:29.697062  181983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 01:05:29.697175  181983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 01:05:29.705928  181983 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 01:05:29.706006  181983 start.go:496] detecting cgroup driver to use...
	I1211 01:05:29.706053  181983 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 01:05:29.706122  181983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 01:05:29.727579  181983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 01:05:29.742931  181983 docker.go:218] disabling cri-docker service (if available) ...
	I1211 01:05:29.743124  181983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 01:05:29.760059  181983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 01:05:29.775599  181983 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 01:05:29.924667  181983 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 01:05:30.072953  181983 docker.go:234] disabling docker service ...
	I1211 01:05:30.073086  181983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 01:05:30.092907  181983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 01:05:30.113946  181983 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 01:05:30.272170  181983 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 01:05:30.420860  181983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 01:05:30.438947  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 01:05:30.454938  181983 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 01:05:30.455065  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.465201  181983 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 01:05:30.465277  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.477779  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.487451  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.497197  181983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 01:05:30.506166  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.516238  181983 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.525603  181983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:05:30.536800  181983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 01:05:30.545553  181983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 01:05:30.554108  181983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:05:30.718069  181983 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 01:05:33.233729  181983 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.515579636s)
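
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pause_image is pinned to "registry.k8s.io/pause:3.10.1", cgroup_manager is set to "cgroupfs" with conmon_cgroup = "pod", and default_sysctls is seeded with "net.ipv4.ip_unprivileged_port_start=0". A quick check of the rewritten values, sketched on the assumption that the profile is still running:

    # show the crio settings the sed commands above just rewrote
    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-174503 -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
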
	I1211 01:05:33.233758  181983 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 01:05:33.233808  181983 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 01:05:33.239083  181983 start.go:564] Will wait 60s for crictl version
	I1211 01:05:33.239142  181983 ssh_runner.go:195] Run: which crictl
	I1211 01:05:33.243246  181983 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 01:05:33.281725  181983 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 01:05:33.281805  181983 ssh_runner.go:195] Run: crio --version
	I1211 01:05:33.328068  181983 ssh_runner.go:195] Run: crio --version
	I1211 01:05:33.379679  181983 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1211 01:05:33.382664  181983 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-174503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 01:05:33.401356  181983 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1211 01:05:33.405137  181983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 01:05:33.416948  181983 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-174503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-174503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 01:05:33.417067  181983 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1211 01:05:33.417134  181983 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:05:33.452861  181983 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1211 01:05:33.452936  181983 ssh_runner.go:195] Run: which lz4
	I1211 01:05:33.457450  181983 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 01:05:33.462159  181983 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 01:05:33.462192  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1211 01:05:37.118001  181983 crio.go:462] duration metric: took 3.660591301s to copy over tarball
	I1211 01:05:37.118116  181983 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 01:05:39.575797  181983 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.457648273s)
	I1211 01:05:39.575826  181983 crio.go:469] duration metric: took 2.457781868s to extract the tarball
	I1211 01:05:39.575835  181983 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 01:05:39.663168  181983 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:05:39.700454  181983 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:05:39.700477  181983 cache_images.go:86] Images are preloaded, skipping loading
	I1211 01:05:39.700485  181983 kubeadm.go:935] updating node { 192.168.76.2  8443 v1.35.0-beta.0 crio true true} ...
	I1211 01:05:39.700606  181983 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-174503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-174503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 01:05:39.700684  181983 ssh_runner.go:195] Run: crio config
	I1211 01:05:39.775712  181983 cni.go:84] Creating CNI manager for ""
	I1211 01:05:39.775740  181983 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:05:39.775766  181983 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 01:05:39.775789  181983 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-174503 NodeName:kubernetes-upgrade-174503 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 01:05:39.775925  181983 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-174503"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
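The dump above is the complete kubeadm configuration minikube renders for the upgrade: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in the v1beta4 schema, written to /var/tmp/minikube/kubeadm.yaml.new. A config like this can be sanity-checked offline before it is applied; a minimal sketch, assuming a kubeadm binary of the matching version is on PATH (recent kubeadm releases ship a `config validate` subcommand, and `init --dry-run` exercises the full flow without changing the host):

    # Validate the rendered multi-document config against its declared API version.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or run the whole init logic without touching the node.
    sudo kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new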
	
	I1211 01:05:39.776004  181983 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1211 01:05:39.784192  181983 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 01:05:39.784310  181983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 01:05:39.796853  181983 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1211 01:05:39.809727  181983 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1211 01:05:39.823480  181983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1211 01:05:39.836391  181983 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1211 01:05:39.839972  181983 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 01:05:39.856758  181983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:05:40.035226  181983 ssh_runner.go:195] Run: sudo systemctl start kubelet
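The /etc/hosts rewrite at 01:05:39.839 is an idempotent pattern: filter out any stale control-plane.minikube.internal mapping, append the current one, and copy the temp file back before reloading systemd and starting the kubelet. The same pattern as a standalone sketch (IP and hostname taken from the log):

    # Re-point control-plane.minikube.internal at the node IP, idempotently.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$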
	I1211 01:05:40.053824  181983 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503 for IP: 192.168.76.2
	I1211 01:05:40.053892  181983 certs.go:195] generating shared ca certs ...
	I1211 01:05:40.053924  181983 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:05:40.054089  181983 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 01:05:40.054190  181983 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 01:05:40.054232  181983 certs.go:257] generating profile certs ...
	I1211 01:05:40.054359  181983 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/client.key
	I1211 01:05:40.054454  181983 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/apiserver.key.3cb4085f
	I1211 01:05:40.054536  181983 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/proxy-client.key
	I1211 01:05:40.054728  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 01:05:40.054802  181983 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 01:05:40.054839  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 01:05:40.054897  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 01:05:40.054959  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 01:05:40.055094  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 01:05:40.055195  181983 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:05:40.055944  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 01:05:40.079140  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 01:05:40.101877  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 01:05:40.154743  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 01:05:40.183083  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1211 01:05:40.201729  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 01:05:40.220546  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 01:05:40.247595  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 01:05:40.270076  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 01:05:40.289368  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 01:05:40.308705  181983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 01:05:40.326229  181983 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
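All profile certificates and keys are pushed into /var/lib/minikube/certs before kubeadm runs. A quick pairing check for any of them is to compare public-key digests; a minimal sketch using the apiserver pair from the log (works for RSA and EC keys alike):

    cert=/var/lib/minikube/certs/apiserver.crt
    key=/var/lib/minikube/certs/apiserver.key
    # The pair matches iff both sides yield the same public key, hence the same digest.
    c=$(sudo openssl x509 -noout -pubkey -in "$cert" | openssl sha256)
    k=$(sudo openssl pkey -pubout -in "$key" | openssl sha256)
    [ "$c" = "$k" ] && echo match || echo MISMATCH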
	I1211 01:05:40.340183  181983 ssh_runner.go:195] Run: openssl version
	I1211 01:05:40.346540  181983 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 01:05:40.354652  181983 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 01:05:40.362258  181983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 01:05:40.366180  181983 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 01:05:40.366298  181983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 01:05:40.407418  181983 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 01:05:40.415022  181983 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:05:40.422504  181983 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 01:05:40.430077  181983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:05:40.434040  181983 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:05:40.434128  181983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:05:40.474939  181983 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 01:05:40.482490  181983 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 01:05:40.490174  181983 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 01:05:40.498600  181983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 01:05:40.502461  181983 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 01:05:40.502568  181983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 01:05:40.544339  181983 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
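The openssl x509 -hash / ln -fs / test -L sequence run for each CA above is the standard OpenSSL trust-store convention: OpenSSL locates a CA through a symlink named <subject-hash>.0 in /etc/ssl/certs, so each .pem is hashed and linked under that name (b5213941.0 above is the subject hash of minikubeCA.pem). The same steps as a sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # OpenSSL resolves CAs via <hash>.0
    sudo test -L "/etc/ssl/certs/${hash}.0" && echo "CA installed"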
	I1211 01:05:40.551905  181983 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 01:05:40.556736  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 01:05:40.597985  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 01:05:40.644309  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 01:05:40.686430  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 01:05:40.730615  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 01:05:40.772769  181983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
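Each control-plane certificate is then checked with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours (86400 seconds) and would force regeneration. The six checks from the log, folded into one loop:

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
      # -checkend N fails if the cert expires within N seconds.
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" \
        -checkend 86400 || echo "expiring within 24h: ${crt}"
    done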
	I1211 01:05:40.814490  181983 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-174503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-174503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:05:40.814579  181983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 01:05:40.814642  181983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 01:05:40.848300  181983 cri.go:89] found id: ""
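The empty `found id: ""` means the label-filtered CRI query matched nothing: no kube-system containers exist on the node yet. The same query, plus an inspect of whatever it returns, as a sketch (assuming crictl is configured for the CRI-O socket):

    # List every kube-system container, running or exited, by pod-namespace label.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    for id in $ids; do
      sudo crictl inspect "$id" | head -n 20    # peek at state and metadata
    done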
	I1211 01:05:40.848374  181983 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 01:05:40.857232  181983 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 01:05:40.857254  181983 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 01:05:40.857307  181983 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 01:05:40.865151  181983 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 01:05:40.865569  181983 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-174503" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:05:40.865681  181983 kubeconfig.go:62] /home/jenkins/minikube-integration/22061-2739/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-174503" cluster setting kubeconfig missing "kubernetes-upgrade-174503" context setting]
	I1211 01:05:40.865957  181983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:05:40.866487  181983 kapi.go:59] client config for kubernetes-upgrade-174503: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/kubernetes-upgrade-174503/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 01:05:40.867054  181983 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 01:05:40.867077  181983 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 01:05:40.867083  181983 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 01:05:40.867088  181983 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 01:05:40.867093  181983 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
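The kubeconfig repair above adds the missing cluster and context stanzas to the Jenkins kubeconfig before a client is built; the client config dump shows the resulting endpoint and the client certificate pair used to authenticate. Done by hand, the equivalent would be roughly this (a hedged sketch using names and paths from the log, not minikube's actual code path):

    KC=/home/jenkins/minikube-integration/22061-2739/kubeconfig
    MK=/home/jenkins/minikube-integration/22061-2739/.minikube
    kubectl --kubeconfig "$KC" config set-cluster kubernetes-upgrade-174503 \
      --server=https://192.168.76.2:8443 --certificate-authority="$MK/ca.crt"
    kubectl --kubeconfig "$KC" config set-credentials kubernetes-upgrade-174503 \
      --client-certificate="$MK/profiles/kubernetes-upgrade-174503/client.crt" \
      --client-key="$MK/profiles/kubernetes-upgrade-174503/client.key"
    kubectl --kubeconfig "$KC" config set-context kubernetes-upgrade-174503 \
      --cluster=kubernetes-upgrade-174503 --user=kubernetes-upgrade-174503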
	I1211 01:05:40.867472  181983 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 01:05:40.878482  181983 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-11 01:04:55.999939531 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-11 01:05:39.831545547 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-174503"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
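The detected drift is the kubeadm API bump from v1beta3 to v1beta4: extraArgs turns from a string map into a list of name/value pairs, the etcd proxy-refresh-interval override from the old config is dropped, and kubernetesVersion moves from v1.28.0 to v1.35.0-beta.0. kubeadm can perform this schema migration itself; a minimal sketch (output path hypothetical):

    # Rewrite a v1beta3 config into the newest API version kubeadm supports.
    kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml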
	I1211 01:05:40.878506  181983 kubeadm.go:1161] stopping kube-system containers ...
	I1211 01:05:40.878520  181983 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1211 01:05:40.878580  181983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 01:05:40.921120  181983 cri.go:89] found id: ""
	I1211 01:05:40.921193  181983 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1211 01:05:40.953147  181983 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 01:05:40.961219  181983 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 11 01:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 11 01:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 11 01:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 11 01:05 /etc/kubernetes/scheduler.conf
	
	I1211 01:05:40.961294  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 01:05:40.969262  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 01:05:40.982177  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 01:05:40.990362  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 01:05:40.990436  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 01:05:40.999439  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 01:05:41.007045  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1211 01:05:41.007109  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
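Each of the four kubeconfigs under /etc/kubernetes is grepped for the shared endpoint https://control-plane.minikube.internal:8443; controller-manager.conf and scheduler.conf fail the check here and are deleted so the next init phase regenerates them. The check as one loop:

    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      conf=/etc/kubernetes/${f}.conf
      # Drop any kubeconfig that does not target the shared control-plane endpoint.
      sudo grep -q "$EP" "$conf" || sudo rm -f "$conf"
    done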
	I1211 01:05:41.015513  181983 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 01:05:41.024654  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 01:05:41.087836  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 01:05:42.712406  181983 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.624533579s)
	I1211 01:05:42.712484  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1211 01:05:42.951181  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1211 01:05:43.056370  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
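Rather than a full kubeadm init, the restart path replays individual phases against the new config: certs, kubeconfig files, kubelet bootstrap, the static control-plane manifests, and local etcd. The same sequence as a standalone sketch, with the versioned binaries prepended to PATH as in the log:

    K8S=/var/lib/minikube/binaries/v1.35.0-beta.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" expands to two arguments.
      sudo env PATH="$K8S:$PATH" kubeadm init phase $phase --config "$CFG"
    done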
	I1211 01:05:43.112483  181983 api_server.go:52] waiting for apiserver process to appear ...
	I1211 01:05:43.112559  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:43.612644  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:44.113248  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:44.613320  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:45.112708  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:45.613369  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:46.113297  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:46.613324  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:47.112658  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:47.613524  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:48.113242  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:48.612683  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:49.113529  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:49.613547  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:50.112655  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:50.613488  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:51.113261  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:51.612683  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:52.112804  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:52.613251  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:53.113629  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:53.612725  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:54.112625  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:54.613074  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:55.112684  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:55.612676  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:56.112675  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:56.613473  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:57.113225  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:57.613083  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:58.112659  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:58.613449  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:59.112683  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:05:59.612663  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:00.116639  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:00.613271  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:01.112632  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:01.613555  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:02.112752  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:02.613394  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:03.112864  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:03.612697  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:04.113534  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:04.613584  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:05.112678  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:05.612680  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:06.112749  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:06.613494  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:07.113706  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:07.612762  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:08.112702  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:08.612662  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:09.113422  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:09.613413  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:10.112644  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:10.612719  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:11.113234  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:11.613141  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:12.112701  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:12.613369  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:13.113355  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:13.613095  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:14.113566  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:14.612662  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:15.112682  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:15.613364  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:16.113310  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:16.613589  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:17.112684  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:17.612680  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:18.112646  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:18.613421  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:19.113426  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:19.613377  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:20.112677  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:20.612758  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:21.112686  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:21.613077  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:22.112984  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:22.613571  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:23.112750  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:23.613535  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:24.113293  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:24.613092  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:25.112752  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:25.612771  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:26.113349  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:26.613102  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:27.112677  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:27.613301  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:28.112677  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:28.613585  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:29.113472  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:29.612777  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:30.113471  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:30.613204  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:31.113565  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:31.612755  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:32.113461  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:32.613197  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:33.113108  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:33.613043  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:34.112696  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:34.613659  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:35.113702  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:35.613612  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:36.113010  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:36.613422  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:37.113243  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:37.612633  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:38.112701  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:38.613523  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:39.113611  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:39.613284  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:40.112937  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:40.612768  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:41.113567  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:41.612841  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:42.112889  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:42.612917  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
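The pgrep polling above runs every ~500 ms from 01:05:43 to 01:06:42, waiting for a kube-apiserver process whose command line mentions minikube; it never appears, so after roughly a minute the code falls through to log collection. A generic version of that wait (the 60 s budget is an assumption for illustration):

    # -x exact match, -n newest process, -f match against the full command line.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'apiserver never appeared' >&2; break; }
      sleep 0.5
    done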
	I1211 01:06:43.112672  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:43.112787  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:43.145424  181983 cri.go:89] found id: ""
	I1211 01:06:43.145445  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.145453  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:43.145459  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:43.145517  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:43.173510  181983 cri.go:89] found id: ""
	I1211 01:06:43.173531  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.173539  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:43.173545  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:43.173605  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:43.211948  181983 cri.go:89] found id: ""
	I1211 01:06:43.211972  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.211981  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:43.211988  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:43.212045  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:43.241670  181983 cri.go:89] found id: ""
	I1211 01:06:43.241693  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.241702  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:43.241708  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:43.241767  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:43.271441  181983 cri.go:89] found id: ""
	I1211 01:06:43.271466  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.271475  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:43.271481  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:43.271538  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:43.297319  181983 cri.go:89] found id: ""
	I1211 01:06:43.297343  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.297352  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:43.297357  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:43.297416  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:43.323244  181983 cri.go:89] found id: ""
	I1211 01:06:43.323266  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.323275  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:43.323281  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:43.323373  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:43.351063  181983 cri.go:89] found id: ""
	I1211 01:06:43.351096  181983 logs.go:282] 0 containers: []
	W1211 01:06:43.351106  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:43.351115  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:43.351127  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:43.420735  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:43.420771  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:43.436769  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:43.436799  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:43.668032  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:06:43.668057  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:43.668113  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:43.698547  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:43.698583  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
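On each failed wait, the same five diagnostic sources are collected: the kubelet journal, filtered dmesg, kubectl describe nodes (which fails here because nothing is listening on 8443), the CRI-O journal, and container status. Gathered into a single file, the bundle looks like this (output path hypothetical; commands verbatim from the log):

    out=/tmp/minikube-diag.txt
    {
      sudo journalctl -u kubelet -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
      sudo journalctl -u crio -n 400
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    } > "$out" 2>&1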
	I1211 01:06:46.228126  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:46.240718  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:46.240793  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:46.267132  181983 cri.go:89] found id: ""
	I1211 01:06:46.267154  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.267163  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:46.267169  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:46.267231  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:46.292521  181983 cri.go:89] found id: ""
	I1211 01:06:46.292589  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.292607  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:46.292614  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:46.292687  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:46.318835  181983 cri.go:89] found id: ""
	I1211 01:06:46.318868  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.318877  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:46.318883  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:46.318950  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:46.348666  181983 cri.go:89] found id: ""
	I1211 01:06:46.348693  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.348703  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:46.348709  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:46.348774  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:46.375746  181983 cri.go:89] found id: ""
	I1211 01:06:46.375823  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.375847  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:46.375862  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:46.375942  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:46.402856  181983 cri.go:89] found id: ""
	I1211 01:06:46.402891  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.402907  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:46.402914  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:46.403017  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:46.429560  181983 cri.go:89] found id: ""
	I1211 01:06:46.429598  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.429606  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:46.429613  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:46.429691  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:46.455937  181983 cri.go:89] found id: ""
	I1211 01:06:46.455970  181983 logs.go:282] 0 containers: []
	W1211 01:06:46.455981  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:46.455991  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:46.456005  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:46.470502  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:46.470531  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:46.547590  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:06:46.547656  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:46.547688  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:46.579354  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:46.579384  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:06:46.611762  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:46.611787  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:49.184620  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:49.195754  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:49.195831  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:49.227080  181983 cri.go:89] found id: ""
	I1211 01:06:49.227104  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.227113  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:49.227119  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:49.227178  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:49.253447  181983 cri.go:89] found id: ""
	I1211 01:06:49.253472  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.253481  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:49.253487  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:49.253549  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:49.279125  181983 cri.go:89] found id: ""
	I1211 01:06:49.279152  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.279162  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:49.279168  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:49.279226  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:49.304224  181983 cri.go:89] found id: ""
	I1211 01:06:49.304251  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.304273  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:49.304280  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:49.304343  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:49.330430  181983 cri.go:89] found id: ""
	I1211 01:06:49.330469  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.330478  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:49.330485  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:49.330561  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:49.356981  181983 cri.go:89] found id: ""
	I1211 01:06:49.357044  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.357071  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:49.357085  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:49.357159  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:49.384238  181983 cri.go:89] found id: ""
	I1211 01:06:49.384304  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.384328  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:49.384348  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:49.384426  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:49.411069  181983 cri.go:89] found id: ""
	I1211 01:06:49.411144  181983 logs.go:282] 0 containers: []
	W1211 01:06:49.411167  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:49.411192  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:49.411219  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:06:49.440619  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:49.440648  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:49.509028  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:49.509064  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:49.524619  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:49.524648  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:49.597900  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:06:49.597919  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:49.597952  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:52.130710  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:52.141548  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:52.141631  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:52.173681  181983 cri.go:89] found id: ""
	I1211 01:06:52.173718  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.173729  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:52.173736  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:52.173805  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:52.207061  181983 cri.go:89] found id: ""
	I1211 01:06:52.207088  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.207097  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:52.207103  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:52.207161  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:52.234112  181983 cri.go:89] found id: ""
	I1211 01:06:52.234137  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.234145  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:52.234151  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:52.234216  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:52.261737  181983 cri.go:89] found id: ""
	I1211 01:06:52.261761  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.261770  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:52.261776  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:52.261837  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:52.287023  181983 cri.go:89] found id: ""
	I1211 01:06:52.287056  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.287065  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:52.287072  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:52.287148  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:52.312572  181983 cri.go:89] found id: ""
	I1211 01:06:52.312600  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.312610  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:52.312617  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:52.312677  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:52.342284  181983 cri.go:89] found id: ""
	I1211 01:06:52.342316  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.342326  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:52.342333  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:52.342410  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:52.370046  181983 cri.go:89] found id: ""
	I1211 01:06:52.370115  181983 logs.go:282] 0 containers: []
	W1211 01:06:52.370138  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:52.370162  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:52.370200  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:52.438057  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:52.438092  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:52.452598  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:52.452625  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:52.517446  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:06:52.517469  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:52.517483  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:52.548294  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:52.548327  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:06:55.079468  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:55.089808  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:55.089885  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:55.118677  181983 cri.go:89] found id: ""
	I1211 01:06:55.118710  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.118719  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:55.118730  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:55.118794  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:55.167760  181983 cri.go:89] found id: ""
	I1211 01:06:55.167784  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.167794  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:55.167800  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:55.167870  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:55.213409  181983 cri.go:89] found id: ""
	I1211 01:06:55.213432  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.213441  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:55.213447  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:55.213506  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:55.242613  181983 cri.go:89] found id: ""
	I1211 01:06:55.242641  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.242651  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:55.242657  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:55.242724  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:55.268244  181983 cri.go:89] found id: ""
	I1211 01:06:55.268267  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.268275  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:55.268281  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:55.268344  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:55.294998  181983 cri.go:89] found id: ""
	I1211 01:06:55.295021  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.295029  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:55.295035  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:55.295093  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:55.320265  181983 cri.go:89] found id: ""
	I1211 01:06:55.320341  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.320364  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:55.320379  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:55.320453  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:55.346231  181983 cri.go:89] found id: ""
	I1211 01:06:55.346257  181983 logs.go:282] 0 containers: []
	W1211 01:06:55.346266  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:55.346275  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:55.346286  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:55.413430  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:55.413462  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:55.428347  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:55.428374  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:55.491393  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:06:55.491423  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:55.491435  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:55.521693  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:55.521726  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
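Each retry pass above opens with pgrep -xnf kube-apiserver.*minikube.*, a process-level probe for a live apiserver before any CRI queries are made. A minimal sketch of the same probe, run directly on the node (the surrounding SSH transport is assumed):

    # -f  match against the full command line, not just the executable name
    # -x  require the whole command line to match the pattern exactly
    # -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process is up" \
      || echo "no apiserver process yet"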
	I1211 01:06:58.055161  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:06:58.065656  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:06:58.065738  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:06:58.094166  181983 cri.go:89] found id: ""
	I1211 01:06:58.094189  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.094198  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:06:58.094216  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:06:58.094286  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:06:58.130875  181983 cri.go:89] found id: ""
	I1211 01:06:58.130912  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.130921  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:06:58.130928  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:06:58.131034  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:06:58.182343  181983 cri.go:89] found id: ""
	I1211 01:06:58.182379  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.182395  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:06:58.182401  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:06:58.182469  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:06:58.216769  181983 cri.go:89] found id: ""
	I1211 01:06:58.216791  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.216799  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:06:58.216805  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:06:58.216868  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:06:58.242991  181983 cri.go:89] found id: ""
	I1211 01:06:58.243013  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.243021  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:06:58.243027  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:06:58.243086  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:06:58.270935  181983 cri.go:89] found id: ""
	I1211 01:06:58.270959  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.271001  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:06:58.271008  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:06:58.271071  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:06:58.296148  181983 cri.go:89] found id: ""
	I1211 01:06:58.296169  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.296178  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:06:58.296184  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:06:58.296251  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:06:58.321599  181983 cri.go:89] found id: ""
	I1211 01:06:58.321622  181983 logs.go:282] 0 containers: []
	W1211 01:06:58.321632  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:06:58.321641  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:06:58.321652  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:06:58.352268  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:06:58.352300  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:06:58.383775  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:06:58.383803  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:06:58.450680  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:06:58.450714  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:06:58.465841  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:06:58.465871  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:06:58.528513  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
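Every describe-nodes attempt fails the same way: connection refused on localhost:8443, meaning nothing is listening on the apiserver's secure port inside the node. Two quick ways to confirm that from the node (neither is part of the harness, and ss/curl availability on the image is an assumption):

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # skip TLS verification; any HTTP answer at all would mean a listener exists
    curl -ksS https://localhost:8443/healthz || true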
	I1211 01:07:01.029441  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:01.039667  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:01.039742  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:01.065046  181983 cri.go:89] found id: ""
	I1211 01:07:01.065070  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.065078  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:01.065085  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:01.065145  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:01.095436  181983 cri.go:89] found id: ""
	I1211 01:07:01.095463  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.095472  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:01.095478  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:01.095539  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:01.121890  181983 cri.go:89] found id: ""
	I1211 01:07:01.121917  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.121926  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:01.121932  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:01.121997  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:01.150611  181983 cri.go:89] found id: ""
	I1211 01:07:01.150633  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.150641  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:01.150648  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:01.150707  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:01.185251  181983 cri.go:89] found id: ""
	I1211 01:07:01.185274  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.185283  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:01.185289  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:01.185360  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:01.216564  181983 cri.go:89] found id: ""
	I1211 01:07:01.216588  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.216597  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:01.216603  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:01.216666  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:01.250332  181983 cri.go:89] found id: ""
	I1211 01:07:01.250355  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.250364  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:01.250371  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:01.250433  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:01.276376  181983 cri.go:89] found id: ""
	I1211 01:07:01.276400  181983 logs.go:282] 0 containers: []
	W1211 01:07:01.276409  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:01.276416  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:01.276428  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:01.345220  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:01.345260  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:01.359892  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:01.359924  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:01.429522  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:01.429598  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:01.429627  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:01.471145  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:01.471185  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
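The eight "listing CRI containers" steps in each pass are crictl queries filtered by container name, and every one returns an empty ID list, so CRI-O has not created a single control-plane container. An equivalent manual sweep, as a sketch to run on the node:

    # -a       include exited containers
    # --quiet  print container IDs only
    # --name   filter by container name (regex)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done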
	I1211 01:07:04.004467  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:04.018534  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:04.018624  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:04.048089  181983 cri.go:89] found id: ""
	I1211 01:07:04.048114  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.048123  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:04.048129  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:04.048191  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:04.075259  181983 cri.go:89] found id: ""
	I1211 01:07:04.075282  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.075292  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:04.075298  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:04.075360  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:04.105741  181983 cri.go:89] found id: ""
	I1211 01:07:04.105766  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.105776  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:04.105782  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:04.105842  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:04.134716  181983 cri.go:89] found id: ""
	I1211 01:07:04.134742  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.134751  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:04.134758  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:04.134820  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:04.169957  181983 cri.go:89] found id: ""
	I1211 01:07:04.169987  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.169996  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:04.170003  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:04.170065  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:04.199351  181983 cri.go:89] found id: ""
	I1211 01:07:04.199379  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.199388  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:04.199395  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:04.199455  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:04.229145  181983 cri.go:89] found id: ""
	I1211 01:07:04.229173  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.229182  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:04.229189  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:04.229279  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:04.255533  181983 cri.go:89] found id: ""
	I1211 01:07:04.255561  181983 logs.go:282] 0 containers: []
	W1211 01:07:04.255570  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:04.255579  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:04.255590  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:04.322341  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:04.322379  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:04.337447  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:04.337477  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:04.405735  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:04.405758  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:04.405770  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:04.441906  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:04.441939  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
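The kubelet and CRI-O logs are collected with journalctl -u <unit> -n 400, i.e. the newest 400 journal entries for each systemd unit. When reproducing this by hand, the same units can be read back, or followed live, with standard journalctl flags:

    sudo journalctl -u kubelet -n 400 --no-pager   # last 400 kubelet entries
    sudo journalctl -u crio -n 400 --no-pager      # last 400 CRI-O entries
    sudo journalctl -u crio -f                     # follow CRI-O as it retries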
	I1211 01:07:06.975747  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:06.985836  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:06.985906  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:07.013056  181983 cri.go:89] found id: ""
	I1211 01:07:07.013085  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.013095  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:07.013103  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:07.013180  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:07.040006  181983 cri.go:89] found id: ""
	I1211 01:07:07.040033  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.040043  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:07.040049  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:07.040110  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:07.070557  181983 cri.go:89] found id: ""
	I1211 01:07:07.070579  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.070590  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:07.070596  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:07.070656  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:07.098657  181983 cri.go:89] found id: ""
	I1211 01:07:07.098679  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.098687  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:07.098693  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:07.098761  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:07.123698  181983 cri.go:89] found id: ""
	I1211 01:07:07.123718  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.123726  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:07.123732  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:07.123791  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:07.156630  181983 cri.go:89] found id: ""
	I1211 01:07:07.156649  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.156658  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:07.156664  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:07.156728  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:07.185122  181983 cri.go:89] found id: ""
	I1211 01:07:07.185142  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.185150  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:07.185157  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:07.185229  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:07.213147  181983 cri.go:89] found id: ""
	I1211 01:07:07.213219  181983 logs.go:282] 0 containers: []
	W1211 01:07:07.213242  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:07.213265  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:07.213300  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:07.282562  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:07.282597  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:07.296314  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:07.296340  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:07.359587  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:07.359652  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:07.359682  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:07.390878  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:07.390908  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
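The dmesg step keeps only kernel messages at warning severity or worse, which is where OOM kills, cgroup errors, and similar causes of a dead control plane would surface. Reading the flags off the command in the log (util-linux dmesg assumed):

    # -P        do not pipe output into a pager
    # -H        human-readable timestamps
    # -L=never  disable color escape codes
    # --level   comma-separated list of severities to keep
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400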
	I1211 01:07:09.921216  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:09.932215  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:09.932276  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:09.960800  181983 cri.go:89] found id: ""
	I1211 01:07:09.960821  181983 logs.go:282] 0 containers: []
	W1211 01:07:09.960829  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:09.960836  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:09.960891  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:09.987974  181983 cri.go:89] found id: ""
	I1211 01:07:09.987995  181983 logs.go:282] 0 containers: []
	W1211 01:07:09.988004  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:09.988010  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:09.988071  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:10.018694  181983 cri.go:89] found id: ""
	I1211 01:07:10.018725  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.018744  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:10.018751  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:10.018842  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:10.061450  181983 cri.go:89] found id: ""
	I1211 01:07:10.061492  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.061501  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:10.061508  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:10.061587  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:10.094233  181983 cri.go:89] found id: ""
	I1211 01:07:10.094268  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.094278  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:10.094284  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:10.094357  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:10.122437  181983 cri.go:89] found id: ""
	I1211 01:07:10.122471  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.122481  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:10.122488  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:10.122559  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:10.148286  181983 cri.go:89] found id: ""
	I1211 01:07:10.148317  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.148328  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:10.148334  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:10.148408  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:10.184172  181983 cri.go:89] found id: ""
	I1211 01:07:10.184210  181983 logs.go:282] 0 containers: []
	W1211 01:07:10.184219  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:10.184228  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:10.184240  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:10.252673  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:10.252696  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:10.252709  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:10.284330  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:10.284364  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:10.312418  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:10.312493  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:10.379019  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:10.379050  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
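The container-status step is a shell fallback chain: use whichever crictl the PATH lookup finds, keep the bare name if the lookup fails, and only drop back to docker when the crictl invocation itself errors out. Unrolled, the one-liner from the log reads roughly as:

    CRICTL=$(which crictl || echo crictl)      # bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a  # docker only as a last resort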
	I1211 01:07:12.893491  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:12.903662  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:12.903758  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:12.930562  181983 cri.go:89] found id: ""
	I1211 01:07:12.930633  181983 logs.go:282] 0 containers: []
	W1211 01:07:12.930658  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:12.930679  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:12.930773  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:12.960331  181983 cri.go:89] found id: ""
	I1211 01:07:12.960358  181983 logs.go:282] 0 containers: []
	W1211 01:07:12.960368  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:12.960375  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:12.960434  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:12.988042  181983 cri.go:89] found id: ""
	I1211 01:07:12.988067  181983 logs.go:282] 0 containers: []
	W1211 01:07:12.988076  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:12.988083  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:12.988144  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:13.014005  181983 cri.go:89] found id: ""
	I1211 01:07:13.014090  181983 logs.go:282] 0 containers: []
	W1211 01:07:13.014119  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:13.014135  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:13.014235  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:13.040455  181983 cri.go:89] found id: ""
	I1211 01:07:13.040482  181983 logs.go:282] 0 containers: []
	W1211 01:07:13.040492  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:13.040499  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:13.040559  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:13.066151  181983 cri.go:89] found id: ""
	I1211 01:07:13.066182  181983 logs.go:282] 0 containers: []
	W1211 01:07:13.066195  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:13.066202  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:13.066277  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:13.095994  181983 cri.go:89] found id: ""
	I1211 01:07:13.096061  181983 logs.go:282] 0 containers: []
	W1211 01:07:13.096085  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:13.096105  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:13.096181  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:13.120807  181983 cri.go:89] found id: ""
	I1211 01:07:13.120881  181983 logs.go:282] 0 containers: []
	W1211 01:07:13.120904  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:13.120926  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:13.120945  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:13.153630  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:13.153656  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:13.193702  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:13.193776  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:13.269127  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:13.269163  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:13.283302  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:13.283329  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:13.346037  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:15.847367  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:15.857388  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:15.857463  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:15.883392  181983 cri.go:89] found id: ""
	I1211 01:07:15.883421  181983 logs.go:282] 0 containers: []
	W1211 01:07:15.883430  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:15.883437  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:15.883498  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:15.909655  181983 cri.go:89] found id: ""
	I1211 01:07:15.909679  181983 logs.go:282] 0 containers: []
	W1211 01:07:15.909688  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:15.909694  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:15.909750  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:15.936240  181983 cri.go:89] found id: ""
	I1211 01:07:15.936262  181983 logs.go:282] 0 containers: []
	W1211 01:07:15.936271  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:15.936277  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:15.936336  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:15.960899  181983 cri.go:89] found id: ""
	I1211 01:07:15.960921  181983 logs.go:282] 0 containers: []
	W1211 01:07:15.960930  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:15.960936  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:15.961001  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:15.987674  181983 cri.go:89] found id: ""
	I1211 01:07:15.987698  181983 logs.go:282] 0 containers: []
	W1211 01:07:15.987708  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:15.987715  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:15.987779  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:16.016928  181983 cri.go:89] found id: ""
	I1211 01:07:16.016955  181983 logs.go:282] 0 containers: []
	W1211 01:07:16.016965  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:16.016972  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:16.017043  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:16.042982  181983 cri.go:89] found id: ""
	I1211 01:07:16.043006  181983 logs.go:282] 0 containers: []
	W1211 01:07:16.043016  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:16.043022  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:16.043086  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:16.071930  181983 cri.go:89] found id: ""
	I1211 01:07:16.071954  181983 logs.go:282] 0 containers: []
	W1211 01:07:16.071963  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:16.071972  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:16.071985  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:16.140952  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:16.140972  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:16.140984  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:16.182169  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:16.182202  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:16.215004  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:16.215034  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:16.283981  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:16.284018  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
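Note that the describe-nodes step never touches the host's kubectl; it runs the version-matched binary that minikube stages on the node under /var/lib/minikube/binaries/<version>/, against the node-local kubeconfig. The same check can be issued by hand over minikube ssh (the profile name is a placeholder for this run's profile):

    minikube -p <profile> ssh -- sudo \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig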
	I1211 01:07:18.800412  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:18.811306  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:18.811415  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:18.836140  181983 cri.go:89] found id: ""
	I1211 01:07:18.836166  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.836175  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:18.836182  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:18.836243  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:18.861873  181983 cri.go:89] found id: ""
	I1211 01:07:18.861899  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.861908  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:18.861915  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:18.861978  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:18.891834  181983 cri.go:89] found id: ""
	I1211 01:07:18.891859  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.891868  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:18.891874  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:18.891932  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:18.918091  181983 cri.go:89] found id: ""
	I1211 01:07:18.918117  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.918126  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:18.918132  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:18.918191  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:18.947388  181983 cri.go:89] found id: ""
	I1211 01:07:18.947415  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.947424  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:18.947430  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:18.947492  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:18.972571  181983 cri.go:89] found id: ""
	I1211 01:07:18.972594  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.972603  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:18.972610  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:18.972669  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:18.999432  181983 cri.go:89] found id: ""
	I1211 01:07:18.999454  181983 logs.go:282] 0 containers: []
	W1211 01:07:18.999462  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:18.999468  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:18.999531  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:19.033613  181983 cri.go:89] found id: ""
	I1211 01:07:19.033678  181983 logs.go:282] 0 containers: []
	W1211 01:07:19.033702  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:19.033725  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:19.033756  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:19.070193  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:19.070222  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:19.140492  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:19.140528  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:19.156443  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:19.156598  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:19.236108  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:19.236129  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:19.236140  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:21.767310  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:21.776899  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:21.776970  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:21.801871  181983 cri.go:89] found id: ""
	I1211 01:07:21.801897  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.801905  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:21.801911  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:21.801973  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:21.827028  181983 cri.go:89] found id: ""
	I1211 01:07:21.827053  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.827062  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:21.827068  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:21.827132  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:21.852121  181983 cri.go:89] found id: ""
	I1211 01:07:21.852147  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.852156  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:21.852162  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:21.852221  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:21.879453  181983 cri.go:89] found id: ""
	I1211 01:07:21.879477  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.879487  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:21.879493  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:21.879581  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:21.904230  181983 cri.go:89] found id: ""
	I1211 01:07:21.904259  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.904268  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:21.904274  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:21.904332  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:21.930400  181983 cri.go:89] found id: ""
	I1211 01:07:21.930434  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.930443  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:21.930450  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:21.930521  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:21.955667  181983 cri.go:89] found id: ""
	I1211 01:07:21.955732  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.955755  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:21.955775  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:21.955864  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:21.980459  181983 cri.go:89] found id: ""
	I1211 01:07:21.980482  181983 logs.go:282] 0 containers: []
	W1211 01:07:21.980491  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:21.980500  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:21.980510  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:22.010680  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:22.010717  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:22.045762  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:22.045841  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:22.113376  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:22.113412  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:22.128898  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:22.128931  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:22.220349  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
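The timestamps show the whole pass repeating on a roughly three-second cadence (01:06:52, :55, :58, ...), i.e. a poll-until-deadline loop around the apiserver probe. A standalone sketch of that pattern, with illustrative values (the 5-minute budget and 3-second interval are assumptions, not the harness's actual settings):

    deadline=$(( $(date +%s) + 300 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"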
	I1211 01:07:24.721092  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:24.731576  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:24.731684  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:24.759068  181983 cri.go:89] found id: ""
	I1211 01:07:24.759093  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.759102  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:24.759109  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:24.759171  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:24.786015  181983 cri.go:89] found id: ""
	I1211 01:07:24.786081  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.786104  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:24.786125  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:24.786221  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:24.812077  181983 cri.go:89] found id: ""
	I1211 01:07:24.812155  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.812178  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:24.812197  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:24.812291  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:24.837468  181983 cri.go:89] found id: ""
	I1211 01:07:24.837535  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.837558  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:24.837577  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:24.837667  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:24.863701  181983 cri.go:89] found id: ""
	I1211 01:07:24.863786  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.863810  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:24.863831  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:24.863917  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:24.888906  181983 cri.go:89] found id: ""
	I1211 01:07:24.888972  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.888996  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:24.889021  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:24.889107  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:24.915046  181983 cri.go:89] found id: ""
	I1211 01:07:24.915067  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.915076  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:24.915082  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:24.915145  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:24.942324  181983 cri.go:89] found id: ""
	I1211 01:07:24.942393  181983 logs.go:282] 0 containers: []
	W1211 01:07:24.942431  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:24.942458  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:24.942485  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:24.973145  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:24.973179  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:25.000629  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:25.000658  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:25.070054  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:25.070088  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:25.084225  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:25.084255  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:25.158959  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:27.659249  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:27.670336  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:27.670408  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:27.697875  181983 cri.go:89] found id: ""
	I1211 01:07:27.697903  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.697918  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:27.697925  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:27.697982  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:27.733742  181983 cri.go:89] found id: ""
	I1211 01:07:27.733768  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.733776  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:27.733782  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:27.733842  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:27.759274  181983 cri.go:89] found id: ""
	I1211 01:07:27.759351  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.759376  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:27.759391  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:27.759465  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:27.786081  181983 cri.go:89] found id: ""
	I1211 01:07:27.786115  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.786125  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:27.786138  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:27.786212  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:27.810697  181983 cri.go:89] found id: ""
	I1211 01:07:27.810764  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.810790  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:27.810809  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:27.810896  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:27.838114  181983 cri.go:89] found id: ""
	I1211 01:07:27.838139  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.838148  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:27.838154  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:27.838212  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:27.863310  181983 cri.go:89] found id: ""
	I1211 01:07:27.863338  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.863348  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:27.863354  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:27.863427  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:27.893336  181983 cri.go:89] found id: ""
	I1211 01:07:27.893409  181983 logs.go:282] 0 containers: []
	W1211 01:07:27.893440  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:27.893462  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:27.893505  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:27.961358  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:27.961395  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:27.976621  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:27.976650  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:28.044352  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:28.044375  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:28.044390  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:28.076373  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:28.076406  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
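	The cycle above is minikube's diagnostic sweep while the apiserver is down: it probes each control-plane component through the CRI and, finding no containers, falls back to gathering logs. A minimal shell sketch of the same probe; the crictl invocation and the component list are taken verbatim from the Run: lines above, while the loop and output wording are ours for illustration:

	    # Sketch (ours): the per-component CRI probe seen in the cycle above.
	    # crictl flags and component names are verbatim from the log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done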
	I1211 01:07:30.606502  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:30.616375  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:30.616452  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:30.648660  181983 cri.go:89] found id: ""
	I1211 01:07:30.648682  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.648690  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:30.648697  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:30.648756  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:30.684292  181983 cri.go:89] found id: ""
	I1211 01:07:30.684312  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.684321  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:30.684332  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:30.684388  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:30.713560  181983 cri.go:89] found id: ""
	I1211 01:07:30.713581  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.713589  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:30.713596  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:30.713653  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:30.741292  181983 cri.go:89] found id: ""
	I1211 01:07:30.741315  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.741323  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:30.741330  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:30.741389  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:30.770194  181983 cri.go:89] found id: ""
	I1211 01:07:30.770214  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.770223  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:30.770229  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:30.770288  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:30.794768  181983 cri.go:89] found id: ""
	I1211 01:07:30.794790  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.794798  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:30.794804  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:30.794869  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:30.819594  181983 cri.go:89] found id: ""
	I1211 01:07:30.819668  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.819690  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:30.819706  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:30.819782  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:30.851107  181983 cri.go:89] found id: ""
	I1211 01:07:30.851133  181983 logs.go:282] 0 containers: []
	W1211 01:07:30.851142  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:30.851150  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:30.851161  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:30.880017  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:30.880043  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:30.949204  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:30.949243  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:30.963572  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:30.963602  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:31.029834  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:31.029858  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:31.029879  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
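	Each sweep ends with the same four log sources: kubelet and CRI-O via journalctl, filtered dmesg, kubectl describe nodes, and a container-status listing. A compact version of that gathering step; every command is copied from a Run: line above, and only the function wrapper is added:

	    # Sketch (ours): the log sources gathered at the end of each sweep.
	    gather_logs() {
	      sudo journalctl -u kubelet -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	      sudo journalctl -u crio -n 400
	      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	    }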
	I1211 01:07:33.562059  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:33.575199  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:33.575281  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:33.610690  181983 cri.go:89] found id: ""
	I1211 01:07:33.610717  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.610727  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:33.610733  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:33.610794  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:33.641329  181983 cri.go:89] found id: ""
	I1211 01:07:33.641355  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.641364  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:33.641371  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:33.641428  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:33.667069  181983 cri.go:89] found id: ""
	I1211 01:07:33.667093  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.667102  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:33.667108  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:33.667177  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:33.692597  181983 cri.go:89] found id: ""
	I1211 01:07:33.692620  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.692628  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:33.692633  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:33.692702  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:33.718678  181983 cri.go:89] found id: ""
	I1211 01:07:33.718708  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.718716  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:33.718722  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:33.718781  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:33.743392  181983 cri.go:89] found id: ""
	I1211 01:07:33.743419  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.743428  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:33.743435  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:33.743496  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:33.768432  181983 cri.go:89] found id: ""
	I1211 01:07:33.768459  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.768468  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:33.768474  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:33.768539  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:33.794115  181983 cri.go:89] found id: ""
	I1211 01:07:33.794141  181983 logs.go:282] 0 containers: []
	W1211 01:07:33.794150  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:33.794159  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:33.794171  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:33.864336  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:33.864383  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:33.879704  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:33.879777  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:33.947069  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:33.947097  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:33.947113  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:33.977899  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:33.977931  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:36.513424  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:36.523777  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:36.523843  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:36.557297  181983 cri.go:89] found id: ""
	I1211 01:07:36.557318  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.557326  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:36.557333  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:36.557394  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:36.597243  181983 cri.go:89] found id: ""
	I1211 01:07:36.597265  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.597273  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:36.597279  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:36.597338  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:36.633093  181983 cri.go:89] found id: ""
	I1211 01:07:36.633151  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.633159  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:36.633168  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:36.633226  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:36.668256  181983 cri.go:89] found id: ""
	I1211 01:07:36.668277  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.668285  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:36.668292  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:36.668355  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:36.716996  181983 cri.go:89] found id: ""
	I1211 01:07:36.717016  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.717024  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:36.717030  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:36.717087  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:36.761400  181983 cri.go:89] found id: ""
	I1211 01:07:36.761424  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.761433  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:36.761439  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:36.761500  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:36.797265  181983 cri.go:89] found id: ""
	I1211 01:07:36.797284  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.797293  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:36.797303  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:36.797367  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:36.837534  181983 cri.go:89] found id: ""
	I1211 01:07:36.837559  181983 logs.go:282] 0 containers: []
	W1211 01:07:36.837568  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:36.837577  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:36.837589  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:36.853791  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:36.853821  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:36.941727  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:36.941748  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:36.941760  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:36.981362  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:36.981398  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:37.018744  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:37.018779  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:39.589695  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:39.603427  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:39.603546  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:39.644026  181983 cri.go:89] found id: ""
	I1211 01:07:39.644100  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.644125  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:39.644145  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:39.644268  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:39.677309  181983 cri.go:89] found id: ""
	I1211 01:07:39.677335  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.677343  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:39.677349  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:39.677411  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:39.713155  181983 cri.go:89] found id: ""
	I1211 01:07:39.713192  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.713207  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:39.713214  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:39.713279  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:39.743592  181983 cri.go:89] found id: ""
	I1211 01:07:39.743626  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.743635  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:39.743641  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:39.743725  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:39.788242  181983 cri.go:89] found id: ""
	I1211 01:07:39.788267  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.788276  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:39.788282  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:39.788352  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:39.824616  181983 cri.go:89] found id: ""
	I1211 01:07:39.824654  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.824663  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:39.824669  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:39.824735  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:39.854135  181983 cri.go:89] found id: ""
	I1211 01:07:39.854176  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.854185  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:39.854193  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:39.854270  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:39.888904  181983 cri.go:89] found id: ""
	I1211 01:07:39.888937  181983 logs.go:282] 0 containers: []
	W1211 01:07:39.888946  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:39.888955  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:39.888966  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:39.924128  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:39.924164  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:39.971659  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:39.971684  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:40.053913  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:40.053954  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:40.069265  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:40.069296  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:40.166864  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
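	The timestamps show the whole cycle repeating roughly every three seconds, each round opening with the same `sudo pgrep -xnf kube-apiserver.*minikube.*`; in effect, a wait-for-apiserver poll. A hedged sketch of such a poll: the pgrep pattern is verbatim, while the five-minute budget and the 3 s sleep are assumptions chosen only to match the cadence visible above:

	    # Sketch (ours): poll for an apiserver process, as the repeated pgrep lines do.
	    deadline=$(( $(date +%s) + 300 ))   # assumed 5-minute budget
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3                           # matches the ~3 s interval in the log
	    done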
	I1211 01:07:42.667098  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:42.677001  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:42.677080  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:42.703147  181983 cri.go:89] found id: ""
	I1211 01:07:42.703168  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.703177  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:42.703183  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:42.703242  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:42.730004  181983 cri.go:89] found id: ""
	I1211 01:07:42.730028  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.730037  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:42.730043  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:42.730121  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:42.757489  181983 cri.go:89] found id: ""
	I1211 01:07:42.757511  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.757519  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:42.757526  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:42.757587  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:42.785479  181983 cri.go:89] found id: ""
	I1211 01:07:42.785502  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.785510  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:42.785517  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:42.785578  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:42.810732  181983 cri.go:89] found id: ""
	I1211 01:07:42.810753  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.810762  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:42.810768  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:42.810828  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:42.839943  181983 cri.go:89] found id: ""
	I1211 01:07:42.839967  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.839976  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:42.839982  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:42.840040  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:42.869974  181983 cri.go:89] found id: ""
	I1211 01:07:42.870000  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.870009  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:42.870015  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:42.870075  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:42.907983  181983 cri.go:89] found id: ""
	I1211 01:07:42.908009  181983 logs.go:282] 0 containers: []
	W1211 01:07:42.908020  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:42.908029  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:42.908040  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:42.953204  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:42.953232  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:43.027230  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:43.027306  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:43.045580  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:43.045743  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:43.135202  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:43.135262  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:43.135288  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:45.678597  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:45.688902  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:45.688981  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:45.715411  181983 cri.go:89] found id: ""
	I1211 01:07:45.715436  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.715445  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:45.715451  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:45.715516  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:45.742355  181983 cri.go:89] found id: ""
	I1211 01:07:45.742378  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.742387  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:45.742393  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:45.742453  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:45.769521  181983 cri.go:89] found id: ""
	I1211 01:07:45.769545  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.769555  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:45.769561  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:45.769618  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:45.795282  181983 cri.go:89] found id: ""
	I1211 01:07:45.795304  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.795312  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:45.795319  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:45.795379  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:45.820819  181983 cri.go:89] found id: ""
	I1211 01:07:45.820843  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.820853  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:45.820859  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:45.820918  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:45.847124  181983 cri.go:89] found id: ""
	I1211 01:07:45.847155  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.847165  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:45.847172  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:45.847236  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:45.876086  181983 cri.go:89] found id: ""
	I1211 01:07:45.876113  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.876122  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:45.876129  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:45.876188  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:45.904211  181983 cri.go:89] found id: ""
	I1211 01:07:45.904234  181983 logs.go:282] 0 containers: []
	W1211 01:07:45.904244  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:45.904253  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:45.904265  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:45.976634  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:45.976673  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:45.990842  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:45.990869  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:46.063601  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:46.063622  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:46.063638  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:46.095162  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:46.095198  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:48.639378  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:48.649562  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:48.649643  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:48.675712  181983 cri.go:89] found id: ""
	I1211 01:07:48.675735  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.675744  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:48.675749  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:48.675809  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:48.701343  181983 cri.go:89] found id: ""
	I1211 01:07:48.701369  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.701378  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:48.701384  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:48.701442  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:48.731181  181983 cri.go:89] found id: ""
	I1211 01:07:48.731203  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.731212  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:48.731218  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:48.731278  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:48.757637  181983 cri.go:89] found id: ""
	I1211 01:07:48.757661  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.757671  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:48.757678  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:48.757746  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:48.784479  181983 cri.go:89] found id: ""
	I1211 01:07:48.784502  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.784517  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:48.784523  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:48.784582  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:48.810331  181983 cri.go:89] found id: ""
	I1211 01:07:48.810357  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.810365  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:48.810371  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:48.810447  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:48.836687  181983 cri.go:89] found id: ""
	I1211 01:07:48.836714  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.836723  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:48.836730  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:48.836792  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:48.861575  181983 cri.go:89] found id: ""
	I1211 01:07:48.861602  181983 logs.go:282] 0 containers: []
	W1211 01:07:48.861611  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:48.861658  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:48.861674  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:48.928466  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:48.928501  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:48.944700  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:48.944727  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:49.011740  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:49.011763  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:49.011778  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:49.044389  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:49.044463  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
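	Every `kubectl describe nodes` in this section fails identically: nothing is listening on localhost:8443, so the connection is refused before kubectl can do any work. A lightweight reachability check one could run before the heavier kubectl call; curl and the /healthz endpoint are our assumptions and do not appear in the log, while the port comes from the errors above:

	    # Sketch (ours): probe the apiserver port first.
	    # curl and /healthz are assumptions; localhost:8443 is from the errors above.
	    if curl -ksf --max-time 2 https://localhost:8443/healthz >/dev/null; then
	      echo "apiserver reachable on localhost:8443"
	    else
	      echo "connection to localhost:8443 refused or unhealthy" >&2
	    fi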
	I1211 01:07:51.586894  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:51.597934  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:51.598033  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:51.625913  181983 cri.go:89] found id: ""
	I1211 01:07:51.625939  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.625948  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:51.625954  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:51.626044  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:51.653334  181983 cri.go:89] found id: ""
	I1211 01:07:51.653361  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.653370  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:51.653376  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:51.653433  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:51.681944  181983 cri.go:89] found id: ""
	I1211 01:07:51.681970  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.681991  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:51.681998  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:51.682065  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:51.713656  181983 cri.go:89] found id: ""
	I1211 01:07:51.713682  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.713691  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:51.713698  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:51.713754  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:51.748354  181983 cri.go:89] found id: ""
	I1211 01:07:51.748380  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.748389  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:51.748396  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:51.748453  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:51.775809  181983 cri.go:89] found id: ""
	I1211 01:07:51.775835  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.775846  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:51.775852  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:51.775910  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:51.803308  181983 cri.go:89] found id: ""
	I1211 01:07:51.803333  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.803342  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:51.803348  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:51.803403  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:51.833233  181983 cri.go:89] found id: ""
	I1211 01:07:51.833260  181983 logs.go:282] 0 containers: []
	W1211 01:07:51.833269  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:51.833277  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:51.833288  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:51.911504  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:51.911541  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:51.926614  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:51.926644  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:52.030178  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:52.030201  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:52.030213  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:52.066476  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:52.066517  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:54.602059  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:54.612127  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:54.612226  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:54.640694  181983 cri.go:89] found id: ""
	I1211 01:07:54.640716  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.640725  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:54.640732  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:54.640793  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:54.668029  181983 cri.go:89] found id: ""
	I1211 01:07:54.668052  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.668061  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:54.668067  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:54.668140  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:54.693313  181983 cri.go:89] found id: ""
	I1211 01:07:54.693377  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.693400  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:54.693420  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:54.693507  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:54.719362  181983 cri.go:89] found id: ""
	I1211 01:07:54.719388  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.719408  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:54.719416  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:54.719479  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:54.745292  181983 cri.go:89] found id: ""
	I1211 01:07:54.745373  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.745398  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:54.745418  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:54.745528  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:54.774833  181983 cri.go:89] found id: ""
	I1211 01:07:54.774905  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.774929  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:54.774949  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:54.775067  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:54.799533  181983 cri.go:89] found id: ""
	I1211 01:07:54.799558  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.799569  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:54.799576  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:54.799679  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:54.831080  181983 cri.go:89] found id: ""
	I1211 01:07:54.831156  181983 logs.go:282] 0 containers: []
	W1211 01:07:54.831180  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:54.831205  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:54.831234  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:07:54.908248  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:54.908288  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:54.923768  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:54.923924  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:54.989724  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:54.989744  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:54.989756  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:55.021059  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:55.021101  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:57.565103  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:07:57.574789  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:07:57.574860  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:07:57.599677  181983 cri.go:89] found id: ""
	I1211 01:07:57.599700  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.599709  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:07:57.599715  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:07:57.599772  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:07:57.624707  181983 cri.go:89] found id: ""
	I1211 01:07:57.624728  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.624737  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:07:57.624743  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:07:57.624817  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:07:57.650042  181983 cri.go:89] found id: ""
	I1211 01:07:57.650069  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.650077  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:07:57.650083  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:07:57.650140  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:07:57.678068  181983 cri.go:89] found id: ""
	I1211 01:07:57.678094  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.678103  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:07:57.678109  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:07:57.678168  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:07:57.704285  181983 cri.go:89] found id: ""
	I1211 01:07:57.704307  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.704316  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:07:57.704322  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:07:57.704382  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:07:57.730652  181983 cri.go:89] found id: ""
	I1211 01:07:57.730722  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.730745  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:07:57.730765  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:07:57.730850  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:07:57.757249  181983 cri.go:89] found id: ""
	I1211 01:07:57.757316  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.757340  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:07:57.757361  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:07:57.757443  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:07:57.783483  181983 cri.go:89] found id: ""
	I1211 01:07:57.783549  181983 logs.go:282] 0 containers: []
	W1211 01:07:57.783572  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:07:57.783594  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:07:57.783631  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:07:57.806410  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:07:57.806437  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:07:57.898052  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:07:57.898072  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:07:57.898084  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:07:57.931948  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:07:57.931980  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:07:57.982379  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:07:57.982404  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:00.560366  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:00.571751  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:00.571834  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:00.599137  181983 cri.go:89] found id: ""
	I1211 01:08:00.599165  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.599174  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:00.599181  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:00.599243  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:00.625741  181983 cri.go:89] found id: ""
	I1211 01:08:00.625810  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.625838  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:00.625863  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:00.625962  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:00.653720  181983 cri.go:89] found id: ""
	I1211 01:08:00.653753  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.653765  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:00.653776  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:00.653853  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:00.684418  181983 cri.go:89] found id: ""
	I1211 01:08:00.684483  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.684499  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:00.684507  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:00.684569  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:00.715151  181983 cri.go:89] found id: ""
	I1211 01:08:00.715172  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.715181  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:00.715187  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:00.715247  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:00.747860  181983 cri.go:89] found id: ""
	I1211 01:08:00.747884  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.747892  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:00.747899  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:00.747957  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:00.774588  181983 cri.go:89] found id: ""
	I1211 01:08:00.774613  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.774622  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:00.774629  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:00.774687  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:00.801403  181983 cri.go:89] found id: ""
	I1211 01:08:00.801426  181983 logs.go:282] 0 containers: []
	W1211 01:08:00.801435  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:00.801444  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:00.801476  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:00.870161  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:00.870185  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:00.870199  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:00.901331  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:00.901367  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:00.933198  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:00.933230  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:01.001805  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:01.001845  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
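
The cycle above repeats roughly every three seconds: minikube probes for a running apiserver with pgrep, lists each control-plane container with crictl, and, when nothing is found, gathers kubelet/CRI-O/dmesg logs before retrying. A minimal Go sketch of that probe loop, assuming sudo and crictl are available on the node; the helper name probe and the two-minute deadline are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// probe reports whether any container whose name matches `name` exists,
// mirroring `sudo crictl ps -a --quiet --name=<name>` from the log.
func probe(name string) bool {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(out)) != ""
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		found := 0
		for _, c := range components {
			if probe(c) {
				found++
			}
		}
		if found > 0 {
			fmt.Println("control-plane containers found:", found)
			return
		}
		// Nothing running yet: the real loop gathers logs here
		// before sleeping and retrying.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for control-plane containers")
}
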
	I1211 01:08:03.517305  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:03.527387  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:03.527468  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:03.556046  181983 cri.go:89] found id: ""
	I1211 01:08:03.556083  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.556092  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:03.556098  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:03.556170  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:03.582083  181983 cri.go:89] found id: ""
	I1211 01:08:03.582105  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.582115  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:03.582123  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:03.582195  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:03.607462  181983 cri.go:89] found id: ""
	I1211 01:08:03.607496  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.607505  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:03.607527  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:03.607606  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:03.632478  181983 cri.go:89] found id: ""
	I1211 01:08:03.632500  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.632508  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:03.632515  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:03.632579  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:03.658197  181983 cri.go:89] found id: ""
	I1211 01:08:03.658230  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.658239  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:03.658245  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:03.658316  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:03.683675  181983 cri.go:89] found id: ""
	I1211 01:08:03.683702  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.683711  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:03.683718  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:03.683807  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:03.709705  181983 cri.go:89] found id: ""
	I1211 01:08:03.709769  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.709783  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:03.709791  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:03.709850  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:03.740507  181983 cri.go:89] found id: ""
	I1211 01:08:03.740584  181983 logs.go:282] 0 containers: []
	W1211 01:08:03.740599  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:03.740609  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:03.740622  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:03.771040  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:03.771086  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:03.841655  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:03.841700  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:03.856615  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:03.856647  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:03.925784  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:03.925857  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:03.925877  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:06.457384  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:06.467585  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:06.467659  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:06.493490  181983 cri.go:89] found id: ""
	I1211 01:08:06.493512  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.493521  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:06.493527  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:06.493588  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:06.524120  181983 cri.go:89] found id: ""
	I1211 01:08:06.524147  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.524156  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:06.524162  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:06.524249  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:06.552539  181983 cri.go:89] found id: ""
	I1211 01:08:06.552563  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.552572  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:06.552579  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:06.552639  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:06.582481  181983 cri.go:89] found id: ""
	I1211 01:08:06.582504  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.582512  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:06.582518  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:06.582581  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:06.608867  181983 cri.go:89] found id: ""
	I1211 01:08:06.608938  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.608961  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:06.608982  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:06.609066  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:06.638344  181983 cri.go:89] found id: ""
	I1211 01:08:06.638366  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.638375  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:06.638380  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:06.638444  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:06.665542  181983 cri.go:89] found id: ""
	I1211 01:08:06.665609  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.665634  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:06.665661  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:06.665733  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:06.695354  181983 cri.go:89] found id: ""
	I1211 01:08:06.695380  181983 logs.go:282] 0 containers: []
	W1211 01:08:06.695389  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:06.695398  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:06.695430  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:06.726044  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:06.726076  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:06.758358  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:06.758386  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:06.825670  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:06.825751  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:06.840469  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:06.840498  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:06.906174  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
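
The recurring "connection to the server localhost:8443 was refused" failure simply means nothing is listening on the apiserver port while `kubectl describe nodes` runs. A quick reachability check equivalent to that failure mode, as a sketch (port 8443 is taken from the log above; adjust for other configurations):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port directly; a refused connection here
	// matches the kubectl error seen in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
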
	I1211 01:08:09.406397  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:09.416832  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:09.416906  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:09.447396  181983 cri.go:89] found id: ""
	I1211 01:08:09.447433  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.447443  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:09.447454  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:09.447544  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:09.482797  181983 cri.go:89] found id: ""
	I1211 01:08:09.482823  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.482833  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:09.482839  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:09.482895  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:09.508480  181983 cri.go:89] found id: ""
	I1211 01:08:09.508507  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.508516  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:09.508522  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:09.508580  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:09.533483  181983 cri.go:89] found id: ""
	I1211 01:08:09.533510  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.533520  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:09.533526  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:09.533580  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:09.558432  181983 cri.go:89] found id: ""
	I1211 01:08:09.558458  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.558467  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:09.558474  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:09.558534  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:09.587590  181983 cri.go:89] found id: ""
	I1211 01:08:09.587614  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.587624  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:09.587631  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:09.587693  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:09.614054  181983 cri.go:89] found id: ""
	I1211 01:08:09.614080  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.614088  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:09.614095  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:09.614155  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:09.638959  181983 cri.go:89] found id: ""
	I1211 01:08:09.639012  181983 logs.go:282] 0 containers: []
	W1211 01:08:09.639020  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:09.639030  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:09.639042  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:09.653445  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:09.653475  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:09.723033  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:09.723055  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:09.723096  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:09.754516  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:09.754548  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:09.786234  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:09.786259  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:12.356974  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:12.367016  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:12.367082  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:12.408768  181983 cri.go:89] found id: ""
	I1211 01:08:12.408795  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.408804  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:12.408810  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:12.408867  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:12.491650  181983 cri.go:89] found id: ""
	I1211 01:08:12.491677  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.491686  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:12.491693  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:12.491749  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:12.520705  181983 cri.go:89] found id: ""
	I1211 01:08:12.520733  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.520743  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:12.520749  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:12.520807  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:12.552701  181983 cri.go:89] found id: ""
	I1211 01:08:12.552725  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.552734  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:12.552741  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:12.552801  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:12.581099  181983 cri.go:89] found id: ""
	I1211 01:08:12.581122  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.581132  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:12.581170  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:12.581251  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:12.609553  181983 cri.go:89] found id: ""
	I1211 01:08:12.609577  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.609585  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:12.609591  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:12.609648  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:12.634541  181983 cri.go:89] found id: ""
	I1211 01:08:12.634564  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.634572  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:12.634578  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:12.634635  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:12.659195  181983 cri.go:89] found id: ""
	I1211 01:08:12.659220  181983 logs.go:282] 0 containers: []
	W1211 01:08:12.659228  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:12.659237  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:12.659247  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:12.725833  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:12.725867  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:12.741167  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:12.741195  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:12.806443  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:12.806462  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:12.806474  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:12.836848  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:12.836886  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
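
Each gathering pass shells out to journalctl for the kubelet and CRI-O units, capped at the last 400 lines. A sketch of that collection step, assuming journalctl is present on the node; the helper name unitLogs is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs fetches the last 400 lines of a systemd unit's journal,
// mirroring `sudo journalctl -u <unit> -n 400` from the log.
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit,
		"-n", "400").Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "crio"} {
		logs, err := unitLogs(u)
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", u, err)
			continue
		}
		fmt.Printf("--- %s (%d bytes) ---\n", u, len(logs))
	}
}
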
	I1211 01:08:15.369435  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:15.379931  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:15.380002  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:15.442882  181983 cri.go:89] found id: ""
	I1211 01:08:15.442909  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.442918  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:15.442924  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:15.443017  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:15.498847  181983 cri.go:89] found id: ""
	I1211 01:08:15.498869  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.498877  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:15.498883  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:15.498948  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:15.539763  181983 cri.go:89] found id: ""
	I1211 01:08:15.539785  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.539793  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:15.539802  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:15.539862  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:15.578868  181983 cri.go:89] found id: ""
	I1211 01:08:15.578890  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.578900  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:15.578906  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:15.578987  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:15.617213  181983 cri.go:89] found id: ""
	I1211 01:08:15.617295  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.617319  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:15.617340  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:15.617425  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:15.649526  181983 cri.go:89] found id: ""
	I1211 01:08:15.649552  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.649561  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:15.649593  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:15.649678  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:15.701231  181983 cri.go:89] found id: ""
	I1211 01:08:15.701265  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.701275  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:15.701298  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:15.701381  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:15.736473  181983 cri.go:89] found id: ""
	I1211 01:08:15.736498  181983 logs.go:282] 0 containers: []
	W1211 01:08:15.736507  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:15.736517  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:15.736552  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:15.824817  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:15.824854  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:15.839320  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:15.839395  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:15.936214  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:15.936234  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:15.936246  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:15.971880  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:15.971909  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:18.503128  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:18.514779  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:18.514861  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:18.561963  181983 cri.go:89] found id: ""
	I1211 01:08:18.561999  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.562009  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:18.562016  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:18.562083  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:18.595272  181983 cri.go:89] found id: ""
	I1211 01:08:18.595307  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.595317  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:18.595323  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:18.595399  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:18.627662  181983 cri.go:89] found id: ""
	I1211 01:08:18.627698  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.627707  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:18.627713  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:18.627780  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:18.659549  181983 cri.go:89] found id: ""
	I1211 01:08:18.659580  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.659590  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:18.659596  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:18.659661  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:18.694631  181983 cri.go:89] found id: ""
	I1211 01:08:18.694667  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.694682  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:18.694689  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:18.694760  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:18.735345  181983 cri.go:89] found id: ""
	I1211 01:08:18.735380  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.735389  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:18.735395  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:18.735464  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:18.764352  181983 cri.go:89] found id: ""
	I1211 01:08:18.764377  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.764394  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:18.764400  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:18.764466  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:18.794755  181983 cri.go:89] found id: ""
	I1211 01:08:18.794792  181983 logs.go:282] 0 containers: []
	W1211 01:08:18.794801  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:18.794810  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:18.794821  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:18.833532  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:18.833560  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:18.915715  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:18.915759  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:18.933709  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:18.933818  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:19.027330  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:19.027347  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:19.027362  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:21.576990  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:21.587187  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:21.587260  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:21.615419  181983 cri.go:89] found id: ""
	I1211 01:08:21.615444  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.615453  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:21.615462  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:21.615526  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:21.640837  181983 cri.go:89] found id: ""
	I1211 01:08:21.640863  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.640872  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:21.640878  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:21.640936  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:21.670435  181983 cri.go:89] found id: ""
	I1211 01:08:21.670460  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.670470  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:21.670476  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:21.670539  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:21.699366  181983 cri.go:89] found id: ""
	I1211 01:08:21.699391  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.699409  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:21.699417  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:21.699475  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:21.729629  181983 cri.go:89] found id: ""
	I1211 01:08:21.729656  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.729665  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:21.729671  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:21.729729  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:21.755283  181983 cri.go:89] found id: ""
	I1211 01:08:21.755308  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.755317  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:21.755331  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:21.755390  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:21.780588  181983 cri.go:89] found id: ""
	I1211 01:08:21.780618  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.780628  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:21.780634  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:21.780692  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:21.806321  181983 cri.go:89] found id: ""
	I1211 01:08:21.806358  181983 logs.go:282] 0 containers: []
	W1211 01:08:21.806367  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:21.806393  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:21.806410  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:21.873104  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:21.873139  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:21.888119  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:21.888147  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:21.960362  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:21.960384  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:21.960397  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:21.991257  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:21.991290  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:24.527777  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:24.537994  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:24.538062  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:24.563385  181983 cri.go:89] found id: ""
	I1211 01:08:24.563407  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.563416  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:24.563422  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:24.563479  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:24.588575  181983 cri.go:89] found id: ""
	I1211 01:08:24.588601  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.588610  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:24.588617  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:24.588679  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:24.614493  181983 cri.go:89] found id: ""
	I1211 01:08:24.614518  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.614528  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:24.614541  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:24.614600  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:24.642134  181983 cri.go:89] found id: ""
	I1211 01:08:24.642159  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.642168  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:24.642176  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:24.642238  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:24.668328  181983 cri.go:89] found id: ""
	I1211 01:08:24.668355  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.668365  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:24.668372  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:24.668436  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:24.693384  181983 cri.go:89] found id: ""
	I1211 01:08:24.693413  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.693422  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:24.693429  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:24.693488  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:24.719811  181983 cri.go:89] found id: ""
	I1211 01:08:24.719834  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.719844  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:24.719851  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:24.719912  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:24.744885  181983 cri.go:89] found id: ""
	I1211 01:08:24.744908  181983 logs.go:282] 0 containers: []
	W1211 01:08:24.744918  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:24.744926  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:24.744938  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:24.816588  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:24.816623  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:24.831128  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:24.831158  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:24.894680  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:24.894702  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:24.894714  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:24.925593  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:24.925624  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
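
The "container status" step runs `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: prefer crictl, and fall back to docker if crictl is missing or fails. A sketch of the same fallback in Go, with the hypothetical helper containerStatus standing in for minikube's internals:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus lists all containers via crictl, falling back to
// `docker ps -a` when crictl is absent or returns an error.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(out)
}
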
	I1211 01:08:27.459084  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:27.470923  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:27.471098  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:27.501253  181983 cri.go:89] found id: ""
	I1211 01:08:27.501277  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.501285  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:27.501292  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:27.501351  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:27.529831  181983 cri.go:89] found id: ""
	I1211 01:08:27.529855  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.529864  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:27.529870  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:27.529930  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:27.558721  181983 cri.go:89] found id: ""
	I1211 01:08:27.558749  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.558759  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:27.558765  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:27.558826  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:27.583458  181983 cri.go:89] found id: ""
	I1211 01:08:27.583481  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.583491  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:27.583497  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:27.583555  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:27.608819  181983 cri.go:89] found id: ""
	I1211 01:08:27.608841  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.608851  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:27.608857  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:27.608918  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:27.635690  181983 cri.go:89] found id: ""
	I1211 01:08:27.635719  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.635729  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:27.635740  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:27.635815  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:27.663783  181983 cri.go:89] found id: ""
	I1211 01:08:27.663809  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.663819  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:27.663828  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:27.663891  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:27.690708  181983 cri.go:89] found id: ""
	I1211 01:08:27.690729  181983 logs.go:282] 0 containers: []
	W1211 01:08:27.690737  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:27.690746  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:27.690757  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:27.705416  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:27.705443  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:27.777653  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:27.777672  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:27.777684  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:27.808363  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:27.808402  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:27.838029  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:27.838055  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:30.406305  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:30.417363  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:30.417440  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:30.448070  181983 cri.go:89] found id: ""
	I1211 01:08:30.448094  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.448104  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:30.448111  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:30.448180  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:30.482751  181983 cri.go:89] found id: ""
	I1211 01:08:30.482775  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.482784  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:30.482790  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:30.482850  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:30.507821  181983 cri.go:89] found id: ""
	I1211 01:08:30.507846  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.507855  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:30.507862  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:30.507922  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:30.532927  181983 cri.go:89] found id: ""
	I1211 01:08:30.532950  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.532959  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:30.532965  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:30.533023  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:30.561831  181983 cri.go:89] found id: ""
	I1211 01:08:30.561855  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.561865  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:30.561871  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:30.561927  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:30.586047  181983 cri.go:89] found id: ""
	I1211 01:08:30.586072  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.586081  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:30.586087  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:30.586145  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:30.611296  181983 cri.go:89] found id: ""
	I1211 01:08:30.611318  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.611326  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:30.611332  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:30.611390  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:30.638039  181983 cri.go:89] found id: ""
	I1211 01:08:30.638064  181983 logs.go:282] 0 containers: []
	W1211 01:08:30.638073  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:30.638082  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:30.638093  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:30.705398  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:30.705436  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:30.720756  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:30.720791  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:30.787383  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:30.787427  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:30.787440  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:30.819284  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:30.819318  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:33.354097  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:33.364012  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:33.364081  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:33.389917  181983 cri.go:89] found id: ""
	I1211 01:08:33.389938  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.389947  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:33.389953  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:33.390013  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:33.420357  181983 cri.go:89] found id: ""
	I1211 01:08:33.420379  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.420389  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:33.420402  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:33.420461  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:33.452018  181983 cri.go:89] found id: ""
	I1211 01:08:33.452039  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.452047  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:33.452053  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:33.452139  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:33.489992  181983 cri.go:89] found id: ""
	I1211 01:08:33.490015  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.490023  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:33.490029  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:33.490090  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:33.514711  181983 cri.go:89] found id: ""
	I1211 01:08:33.514732  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.514741  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:33.514747  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:33.514806  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:33.539894  181983 cri.go:89] found id: ""
	I1211 01:08:33.539916  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.539929  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:33.539936  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:33.539996  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:33.564307  181983 cri.go:89] found id: ""
	I1211 01:08:33.564328  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.564337  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:33.564343  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:33.564405  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:33.589563  181983 cri.go:89] found id: ""
	I1211 01:08:33.589585  181983 logs.go:282] 0 containers: []
	W1211 01:08:33.589594  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:33.589603  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:33.589615  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:33.663383  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:33.663427  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:33.677605  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:33.677635  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:33.742694  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:33.742759  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:33.742780  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:33.774765  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:33.774801  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
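
Each iteration probes the CRI for every expected control-plane container by name and finds none. The run of crictl calls above is equivalent to this sketch (run inside the node; the component list is taken directly from the log):

	# Probe each control-plane component the same way the harness does:
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output means no container found
	done
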
	I1211 01:08:36.307152  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:36.316798  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:36.316874  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:36.347651  181983 cri.go:89] found id: ""
	I1211 01:08:36.347679  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.347688  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:36.347694  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:36.347753  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:36.376857  181983 cri.go:89] found id: ""
	I1211 01:08:36.376883  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.376892  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:36.376898  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:36.376956  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:36.411349  181983 cri.go:89] found id: ""
	I1211 01:08:36.411371  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.411381  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:36.411387  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:36.411451  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:36.444161  181983 cri.go:89] found id: ""
	I1211 01:08:36.444189  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.444198  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:36.444205  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:36.444271  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:36.484392  181983 cri.go:89] found id: ""
	I1211 01:08:36.484417  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.484426  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:36.484432  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:36.484489  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:36.512320  181983 cri.go:89] found id: ""
	I1211 01:08:36.512343  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.512352  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:36.512358  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:36.512423  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:36.541326  181983 cri.go:89] found id: ""
	I1211 01:08:36.541360  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.541369  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:36.541375  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:36.541436  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:36.567111  181983 cri.go:89] found id: ""
	I1211 01:08:36.567141  181983 logs.go:282] 0 containers: []
	W1211 01:08:36.567150  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:36.567159  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:36.567171  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:36.633796  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:36.633831  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:36.649190  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:36.649224  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:36.711483  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:36.711505  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:36.711534  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:36.742124  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:36.742155  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
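
When no containers turn up, the harness gathers the same diagnostic bundle on every pass. The commands below are copied from the log and can be run by hand inside the node to inspect the failure directly; the final line falls back to docker if crictl is unavailable:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
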
	I1211 01:08:39.270776  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:39.281234  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:39.281309  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:39.307378  181983 cri.go:89] found id: ""
	I1211 01:08:39.307401  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.307410  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:39.307416  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:39.307480  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:39.338872  181983 cri.go:89] found id: ""
	I1211 01:08:39.338898  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.338907  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:39.338913  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:39.339010  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:39.365060  181983 cri.go:89] found id: ""
	I1211 01:08:39.365089  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.365099  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:39.365106  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:39.365167  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:39.391757  181983 cri.go:89] found id: ""
	I1211 01:08:39.391780  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.391789  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:39.391796  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:39.391852  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:39.430212  181983 cri.go:89] found id: ""
	I1211 01:08:39.430238  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.430247  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:39.430253  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:39.430313  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:39.457526  181983 cri.go:89] found id: ""
	I1211 01:08:39.457550  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.457559  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:39.457569  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:39.457635  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:39.488625  181983 cri.go:89] found id: ""
	I1211 01:08:39.488651  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.488661  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:39.488667  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:39.488726  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:39.513867  181983 cri.go:89] found id: ""
	I1211 01:08:39.513892  181983 logs.go:282] 0 containers: []
	W1211 01:08:39.513901  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:39.513910  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:39.513922  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:39.582095  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:39.582128  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:39.596940  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:39.596975  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:39.666215  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:39.666283  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:39.666316  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:39.698140  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:39.698174  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:42.228006  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:42.245265  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:42.245348  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:42.291869  181983 cri.go:89] found id: ""
	I1211 01:08:42.291895  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.291905  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:42.291918  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:42.291989  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:42.338315  181983 cri.go:89] found id: ""
	I1211 01:08:42.338342  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.338352  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:42.338358  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:42.338425  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:42.370137  181983 cri.go:89] found id: ""
	I1211 01:08:42.370164  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.370173  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:42.370179  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:42.370244  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:42.398102  181983 cri.go:89] found id: ""
	I1211 01:08:42.398130  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.398139  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:42.398145  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:42.398205  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:42.486708  181983 cri.go:89] found id: ""
	I1211 01:08:42.486740  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.486749  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:42.486755  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:42.486815  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:42.530072  181983 cri.go:89] found id: ""
	I1211 01:08:42.530099  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.530109  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:42.530115  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:42.530173  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:42.571720  181983 cri.go:89] found id: ""
	I1211 01:08:42.571747  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.571756  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:42.571763  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:42.571820  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:42.605898  181983 cri.go:89] found id: ""
	I1211 01:08:42.605925  181983 logs.go:282] 0 containers: []
	W1211 01:08:42.605934  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:42.605943  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:42.605958  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:42.645438  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:42.645465  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:42.770132  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:42.770168  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:42.786834  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:42.786867  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:42.927922  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:42.927946  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:42.927961  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:45.467178  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:45.478523  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:45.478594  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:45.510758  181983 cri.go:89] found id: ""
	I1211 01:08:45.510780  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.510788  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:45.510794  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:45.510853  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:45.545166  181983 cri.go:89] found id: ""
	I1211 01:08:45.545194  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.545203  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:45.545209  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:45.545267  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:45.580137  181983 cri.go:89] found id: ""
	I1211 01:08:45.580162  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.580172  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:45.580178  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:45.580235  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:45.607912  181983 cri.go:89] found id: ""
	I1211 01:08:45.607940  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.607949  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:45.607955  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:45.608013  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:45.637459  181983 cri.go:89] found id: ""
	I1211 01:08:45.637485  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.637495  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:45.637501  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:45.637562  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:45.668274  181983 cri.go:89] found id: ""
	I1211 01:08:45.668302  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.668311  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:45.668317  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:45.668440  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:45.705275  181983 cri.go:89] found id: ""
	I1211 01:08:45.705303  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.705312  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:45.705318  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:45.705377  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:45.738358  181983 cri.go:89] found id: ""
	I1211 01:08:45.738385  181983 logs.go:282] 0 containers: []
	W1211 01:08:45.738394  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:45.738404  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:45.738415  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:45.816339  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:45.816366  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:45.837885  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:45.837914  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:45.932332  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:45.932353  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:45.932367  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:45.971070  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:45.971142  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:48.504962  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:48.515124  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:48.515200  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:48.541328  181983 cri.go:89] found id: ""
	I1211 01:08:48.541352  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.541362  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:48.541374  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:48.541436  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:48.567925  181983 cri.go:89] found id: ""
	I1211 01:08:48.567948  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.567957  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:48.567964  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:48.568024  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:48.593688  181983 cri.go:89] found id: ""
	I1211 01:08:48.593711  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.593719  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:48.593725  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:48.593793  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:48.623569  181983 cri.go:89] found id: ""
	I1211 01:08:48.623594  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.623604  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:48.623610  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:48.623677  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:48.653544  181983 cri.go:89] found id: ""
	I1211 01:08:48.653569  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.653580  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:48.653586  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:48.653648  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:48.679445  181983 cri.go:89] found id: ""
	I1211 01:08:48.679472  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.679481  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:48.679489  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:48.679550  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:48.706110  181983 cri.go:89] found id: ""
	I1211 01:08:48.706137  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.706146  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:48.706152  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:48.706211  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:48.732069  181983 cri.go:89] found id: ""
	I1211 01:08:48.732092  181983 logs.go:282] 0 containers: []
	W1211 01:08:48.732100  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:48.732109  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:48.732120  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:48.799347  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:48.799380  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:48.813777  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:48.813808  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:48.877498  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:48.877518  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:48.877532  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:48.908880  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:48.908915  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:51.440025  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:51.450656  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:51.450728  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:51.480063  181983 cri.go:89] found id: ""
	I1211 01:08:51.480088  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.480097  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:51.480103  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:51.480161  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:51.506919  181983 cri.go:89] found id: ""
	I1211 01:08:51.506944  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.506953  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:51.506959  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:51.507071  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:51.534800  181983 cri.go:89] found id: ""
	I1211 01:08:51.534827  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.534837  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:51.534844  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:51.534903  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:51.562924  181983 cri.go:89] found id: ""
	I1211 01:08:51.562950  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.562960  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:51.563021  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:51.563084  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:51.590125  181983 cri.go:89] found id: ""
	I1211 01:08:51.590151  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.590161  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:51.590167  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:51.590227  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:51.616796  181983 cri.go:89] found id: ""
	I1211 01:08:51.616828  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.616843  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:51.616850  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:51.616932  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:51.643663  181983 cri.go:89] found id: ""
	I1211 01:08:51.643693  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.643702  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:51.643709  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:51.643771  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:51.669866  181983 cri.go:89] found id: ""
	I1211 01:08:51.669890  181983 logs.go:282] 0 containers: []
	W1211 01:08:51.669899  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:51.669909  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:51.669921  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:51.737799  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:51.737830  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:51.752557  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:51.752637  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:51.821079  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:51.821142  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:51.821170  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:51.851933  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:51.851964  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:54.386924  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:54.397913  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:54.397988  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:54.440245  181983 cri.go:89] found id: ""
	I1211 01:08:54.440268  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.440277  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:54.440284  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:54.440342  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:54.483471  181983 cri.go:89] found id: ""
	I1211 01:08:54.483493  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.483501  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:54.483507  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:54.483566  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:54.509220  181983 cri.go:89] found id: ""
	I1211 01:08:54.509245  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.509254  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:54.509260  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:54.509318  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:54.535488  181983 cri.go:89] found id: ""
	I1211 01:08:54.535509  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.535518  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:54.535524  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:54.535593  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:54.561702  181983 cri.go:89] found id: ""
	I1211 01:08:54.561724  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.561733  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:54.561743  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:54.561804  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:54.586996  181983 cri.go:89] found id: ""
	I1211 01:08:54.587020  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.587030  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:54.587036  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:54.587094  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:54.612569  181983 cri.go:89] found id: ""
	I1211 01:08:54.612592  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.612600  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:54.612607  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:54.612664  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:54.637567  181983 cri.go:89] found id: ""
	I1211 01:08:54.637593  181983 logs.go:282] 0 containers: []
	W1211 01:08:54.637604  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:54.637612  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:54.637624  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:54.668273  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:54.668306  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:54.699619  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:54.699645  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:08:54.766563  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:54.766598  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:54.780581  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:54.780616  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:54.852112  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
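
The recurring "connection to the server localhost:8443 was refused" pins down the failure mode: the in-node kubeconfig points kubectl at localhost:8443, but no kube-apiserver container exists to answer there, so every "describe nodes" attempt exits with status 1. A quick way to verify the configured endpoint (a sketch; the kubeconfig path is the one shown in the log):

	# Inspect which server the in-node kubeconfig targets:
	sudo grep 'server:' /var/lib/minikube/kubeconfig
	# likely shows something like:  server: https://localhost:8443
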
	I1211 01:08:57.352362  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:08:57.362457  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:08:57.362528  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:08:57.394226  181983 cri.go:89] found id: ""
	I1211 01:08:57.394248  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.394258  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:08:57.394264  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:08:57.394323  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:08:57.425294  181983 cri.go:89] found id: ""
	I1211 01:08:57.425327  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.425337  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:08:57.425343  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:08:57.425409  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:08:57.456886  181983 cri.go:89] found id: ""
	I1211 01:08:57.456912  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.456921  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:08:57.456927  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:08:57.456987  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:08:57.492960  181983 cri.go:89] found id: ""
	I1211 01:08:57.492987  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.492997  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:08:57.493003  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:08:57.493063  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:08:57.523244  181983 cri.go:89] found id: ""
	I1211 01:08:57.523271  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.523280  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:08:57.523287  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:08:57.523346  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:08:57.554095  181983 cri.go:89] found id: ""
	I1211 01:08:57.554167  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.554189  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:08:57.554215  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:08:57.554312  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:08:57.580363  181983 cri.go:89] found id: ""
	I1211 01:08:57.580436  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.580452  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:08:57.580459  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:08:57.580544  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:08:57.606107  181983 cri.go:89] found id: ""
	I1211 01:08:57.606141  181983 logs.go:282] 0 containers: []
	W1211 01:08:57.606150  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:08:57.606177  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:08:57.606197  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:08:57.620668  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:08:57.620699  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:08:57.688269  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:08:57.688288  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:08:57.688300  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:08:57.719805  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:08:57.719837  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:08:57.749983  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:08:57.750011  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:00.317623  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:00.352607  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:00.352684  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:00.394515  181983 cri.go:89] found id: ""
	I1211 01:09:00.394541  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.394551  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:00.394557  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:00.394632  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:00.439850  181983 cri.go:89] found id: ""
	I1211 01:09:00.439874  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.439884  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:00.439890  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:00.439961  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:00.476339  181983 cri.go:89] found id: ""
	I1211 01:09:00.476362  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.476371  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:00.476377  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:00.476440  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:00.510417  181983 cri.go:89] found id: ""
	I1211 01:09:00.510439  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.510448  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:00.510454  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:00.510516  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:00.541378  181983 cri.go:89] found id: ""
	I1211 01:09:00.541402  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.541410  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:00.541417  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:00.541479  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:00.571160  181983 cri.go:89] found id: ""
	I1211 01:09:00.571183  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.571192  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:00.571198  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:00.571263  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:00.603282  181983 cri.go:89] found id: ""
	I1211 01:09:00.603355  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.603388  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:00.603408  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:00.603502  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:00.632190  181983 cri.go:89] found id: ""
	I1211 01:09:00.632253  181983 logs.go:282] 0 containers: []
	W1211 01:09:00.632287  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:00.632312  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:00.632351  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:00.703193  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:00.703215  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:00.703229  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:00.734398  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:00.734431  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:00.764514  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:00.764546  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:00.832557  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:00.832594  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:03.349951  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:03.360150  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:03.360224  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:03.385325  181983 cri.go:89] found id: ""
	I1211 01:09:03.385348  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.385357  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:03.385363  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:03.385426  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:03.421551  181983 cri.go:89] found id: ""
	I1211 01:09:03.421579  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.421589  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:03.421613  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:03.421693  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:03.450157  181983 cri.go:89] found id: ""
	I1211 01:09:03.450183  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.450193  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:03.450218  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:03.450312  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:03.479026  181983 cri.go:89] found id: ""
	I1211 01:09:03.479051  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.479060  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:03.479067  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:03.479154  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:03.511537  181983 cri.go:89] found id: ""
	I1211 01:09:03.511560  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.511570  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:03.511577  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:03.511679  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:03.538961  181983 cri.go:89] found id: ""
	I1211 01:09:03.538994  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.539003  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:03.539010  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:03.539076  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:03.566051  181983 cri.go:89] found id: ""
	I1211 01:09:03.566076  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.566085  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:03.566091  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:03.566150  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:03.595639  181983 cri.go:89] found id: ""
	I1211 01:09:03.595664  181983 logs.go:282] 0 containers: []
	W1211 01:09:03.595673  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:03.595683  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:03.595694  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:03.629485  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:03.629509  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:03.700976  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:03.701014  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:03.715631  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:03.715661  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:03.784883  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:03.784903  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:03.784916  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:06.316330  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:06.329379  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:06.329473  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:06.358147  181983 cri.go:89] found id: ""
	I1211 01:09:06.358172  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.358181  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:06.358187  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:06.358250  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:06.382952  181983 cri.go:89] found id: ""
	I1211 01:09:06.382997  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.383007  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:06.383013  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:06.383072  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:06.415994  181983 cri.go:89] found id: ""
	I1211 01:09:06.416020  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.416030  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:06.416037  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:06.416098  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:06.449900  181983 cri.go:89] found id: ""
	I1211 01:09:06.449927  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.449936  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:06.449943  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:06.450004  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:06.487717  181983 cri.go:89] found id: ""
	I1211 01:09:06.487786  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.487802  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:06.487809  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:06.487873  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:06.521491  181983 cri.go:89] found id: ""
	I1211 01:09:06.521515  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.521524  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:06.521542  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:06.521607  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:06.551931  181983 cri.go:89] found id: ""
	I1211 01:09:06.551966  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.551976  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:06.551984  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:06.552056  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:06.578100  181983 cri.go:89] found id: ""
	I1211 01:09:06.578133  181983 logs.go:282] 0 containers: []
	W1211 01:09:06.578143  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:06.578152  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:06.578163  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:06.608766  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:06.608798  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:06.638118  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:06.638145  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:06.706441  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:06.706477  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:06.720974  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:06.721002  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:06.786350  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:09.286634  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:09.296626  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:09.296713  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:09.322272  181983 cri.go:89] found id: ""
	I1211 01:09:09.322296  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.322305  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:09.322311  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:09.322379  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:09.347705  181983 cri.go:89] found id: ""
	I1211 01:09:09.347727  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.347736  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:09.347742  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:09.347803  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:09.375267  181983 cri.go:89] found id: ""
	I1211 01:09:09.375290  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.375299  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:09.375305  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:09.375363  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:09.403707  181983 cri.go:89] found id: ""
	I1211 01:09:09.403729  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.403738  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:09.403744  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:09.403808  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:09.448627  181983 cri.go:89] found id: ""
	I1211 01:09:09.448650  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.448660  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:09.448666  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:09.448737  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:09.484253  181983 cri.go:89] found id: ""
	I1211 01:09:09.484277  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.484286  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:09.484292  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:09.484349  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:09.509658  181983 cri.go:89] found id: ""
	I1211 01:09:09.509681  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.509690  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:09.509696  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:09.509781  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:09.536000  181983 cri.go:89] found id: ""
	I1211 01:09:09.536026  181983 logs.go:282] 0 containers: []
	W1211 01:09:09.536035  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:09.536044  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:09.536056  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:09.603415  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:09.603452  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:09.618375  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:09.618403  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:09.685344  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:09.685408  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:09.685428  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:09.716883  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:09.716915  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:12.246098  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:12.256413  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:12.256485  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:12.282128  181983 cri.go:89] found id: ""
	I1211 01:09:12.282152  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.282160  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:12.282166  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:12.282224  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:12.311841  181983 cri.go:89] found id: ""
	I1211 01:09:12.311865  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.311874  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:12.311880  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:12.311942  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:12.340388  181983 cri.go:89] found id: ""
	I1211 01:09:12.340415  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.340425  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:12.340431  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:12.340506  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:12.366861  181983 cri.go:89] found id: ""
	I1211 01:09:12.366884  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.366893  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:12.366900  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:12.367024  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:12.392634  181983 cri.go:89] found id: ""
	I1211 01:09:12.392661  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.392670  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:12.392676  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:12.392757  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:12.421678  181983 cri.go:89] found id: ""
	I1211 01:09:12.421753  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.421776  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:12.421796  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:12.421894  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:12.451747  181983 cri.go:89] found id: ""
	I1211 01:09:12.451824  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.451847  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:12.451866  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:12.451958  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:12.485602  181983 cri.go:89] found id: ""
	I1211 01:09:12.485680  181983 logs.go:282] 0 containers: []
	W1211 01:09:12.485704  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:12.485727  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:12.485770  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:12.515606  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:12.515634  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:12.586802  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:12.586835  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:12.600952  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:12.600983  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:12.668418  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:12.668441  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:12.668459  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:15.199615  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:15.209782  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:15.209858  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:15.234845  181983 cri.go:89] found id: ""
	I1211 01:09:15.234870  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.234880  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:15.234886  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:15.234943  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:15.261223  181983 cri.go:89] found id: ""
	I1211 01:09:15.261247  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.261263  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:15.261271  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:15.261330  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:15.289677  181983 cri.go:89] found id: ""
	I1211 01:09:15.289702  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.289711  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:15.289718  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:15.289775  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:15.319766  181983 cri.go:89] found id: ""
	I1211 01:09:15.319788  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.319796  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:15.319803  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:15.319866  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:15.344475  181983 cri.go:89] found id: ""
	I1211 01:09:15.344497  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.344507  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:15.344513  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:15.344596  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:15.370095  181983 cri.go:89] found id: ""
	I1211 01:09:15.370124  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.370133  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:15.370140  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:15.370196  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:15.402057  181983 cri.go:89] found id: ""
	I1211 01:09:15.402083  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.402092  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:15.402099  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:15.402158  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:15.435992  181983 cri.go:89] found id: ""
	I1211 01:09:15.436016  181983 logs.go:282] 0 containers: []
	W1211 01:09:15.436026  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:15.436034  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:15.436045  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:15.520965  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:15.521004  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:15.536875  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:15.536903  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:15.601121  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:15.601140  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:15.601154  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:15.632066  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:15.632100  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:18.162100  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:18.172950  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:18.173018  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:18.206080  181983 cri.go:89] found id: ""
	I1211 01:09:18.206103  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.206111  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:18.206118  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:18.206174  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:18.234352  181983 cri.go:89] found id: ""
	I1211 01:09:18.234378  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.234388  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:18.234394  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:18.234453  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:18.261033  181983 cri.go:89] found id: ""
	I1211 01:09:18.261055  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.261065  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:18.261072  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:18.261131  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:18.293380  181983 cri.go:89] found id: ""
	I1211 01:09:18.293464  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.293495  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:18.293521  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:18.293652  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:18.320397  181983 cri.go:89] found id: ""
	I1211 01:09:18.320427  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.320443  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:18.320450  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:18.320554  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:18.345521  181983 cri.go:89] found id: ""
	I1211 01:09:18.345547  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.345556  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:18.345562  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:18.345622  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:18.376010  181983 cri.go:89] found id: ""
	I1211 01:09:18.376073  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.376088  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:18.376095  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:18.376153  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:18.402803  181983 cri.go:89] found id: ""
	I1211 01:09:18.402876  181983 logs.go:282] 0 containers: []
	W1211 01:09:18.402899  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:18.402925  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:18.402984  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:18.425994  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:18.426115  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:18.507260  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:18.507281  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:18.507296  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:18.537796  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:18.537831  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:18.568449  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:18.568476  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:21.139572  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:21.149649  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:21.149723  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:21.176881  181983 cri.go:89] found id: ""
	I1211 01:09:21.176903  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.176912  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:21.176918  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:21.176980  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:21.202620  181983 cri.go:89] found id: ""
	I1211 01:09:21.202648  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.202656  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:21.202662  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:21.202722  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:21.229094  181983 cri.go:89] found id: ""
	I1211 01:09:21.229121  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.229131  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:21.229138  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:21.229198  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:21.256324  181983 cri.go:89] found id: ""
	I1211 01:09:21.256346  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.256355  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:21.256363  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:21.256426  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:21.285939  181983 cri.go:89] found id: ""
	I1211 01:09:21.285966  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.285975  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:21.285983  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:21.286041  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:21.312586  181983 cri.go:89] found id: ""
	I1211 01:09:21.312609  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.312618  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:21.312625  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:21.312687  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:21.339958  181983 cri.go:89] found id: ""
	I1211 01:09:21.340037  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.340061  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:21.340081  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:21.340147  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:21.365429  181983 cri.go:89] found id: ""
	I1211 01:09:21.365455  181983 logs.go:282] 0 containers: []
	W1211 01:09:21.365464  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:21.365473  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:21.365488  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:21.437014  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:21.437095  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:21.452597  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:21.452681  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:21.529653  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:21.529729  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:21.529757  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:21.559946  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:21.559987  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:24.091608  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:24.102145  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:24.102221  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:24.129591  181983 cri.go:89] found id: ""
	I1211 01:09:24.129618  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.129627  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:24.129633  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:24.129692  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:24.160142  181983 cri.go:89] found id: ""
	I1211 01:09:24.160166  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.160176  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:24.160185  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:24.160244  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:24.189508  181983 cri.go:89] found id: ""
	I1211 01:09:24.189533  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.189542  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:24.189549  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:24.189605  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:24.215358  181983 cri.go:89] found id: ""
	I1211 01:09:24.215386  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.215395  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:24.215401  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:24.215461  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:24.240829  181983 cri.go:89] found id: ""
	I1211 01:09:24.240855  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.240864  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:24.240871  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:24.240935  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:24.266143  181983 cri.go:89] found id: ""
	I1211 01:09:24.266164  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.266173  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:24.266179  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:24.266237  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:24.291547  181983 cri.go:89] found id: ""
	I1211 01:09:24.291571  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.291581  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:24.291588  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:24.291646  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:24.318364  181983 cri.go:89] found id: ""
	I1211 01:09:24.318392  181983 logs.go:282] 0 containers: []
	W1211 01:09:24.318400  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:24.318410  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:24.318422  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:24.383295  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:24.383355  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:24.383376  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:24.414168  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:24.414199  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:24.454766  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:24.454835  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:24.530382  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:24.530421  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:27.044869  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:27.055042  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:27.055108  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:27.079795  181983 cri.go:89] found id: ""
	I1211 01:09:27.079820  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.079830  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:27.079836  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:27.079895  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:27.104806  181983 cri.go:89] found id: ""
	I1211 01:09:27.104829  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.104838  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:27.104844  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:27.104904  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:27.130276  181983 cri.go:89] found id: ""
	I1211 01:09:27.130298  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.130307  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:27.130313  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:27.130372  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:27.158310  181983 cri.go:89] found id: ""
	I1211 01:09:27.158332  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.158341  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:27.158347  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:27.158404  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:27.187613  181983 cri.go:89] found id: ""
	I1211 01:09:27.187639  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.187648  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:27.187654  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:27.187714  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:27.216862  181983 cri.go:89] found id: ""
	I1211 01:09:27.216886  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.216895  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:27.216902  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:27.216962  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:27.242576  181983 cri.go:89] found id: ""
	I1211 01:09:27.242603  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.242612  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:27.242619  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:27.242679  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:27.273042  181983 cri.go:89] found id: ""
	I1211 01:09:27.273068  181983 logs.go:282] 0 containers: []
	W1211 01:09:27.273077  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:27.273086  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:27.273098  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:27.340095  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:27.340134  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:27.354865  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:27.354895  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:27.426917  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:27.426938  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:27.426951  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:27.460885  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:27.460929  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:29.989741  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:30.003876  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:30.003957  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:30.068649  181983 cri.go:89] found id: ""
	I1211 01:09:30.068673  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.068687  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:30.068693  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:30.068760  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:30.113547  181983 cri.go:89] found id: ""
	I1211 01:09:30.113579  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.113589  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:30.113595  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:30.113662  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:30.141593  181983 cri.go:89] found id: ""
	I1211 01:09:30.141629  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.141643  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:30.141650  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:30.141711  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:30.169924  181983 cri.go:89] found id: ""
	I1211 01:09:30.169957  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.169967  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:30.169973  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:30.170033  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:30.196493  181983 cri.go:89] found id: ""
	I1211 01:09:30.196520  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.196531  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:30.196537  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:30.196601  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:30.223258  181983 cri.go:89] found id: ""
	I1211 01:09:30.223296  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.223306  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:30.223312  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:30.223390  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:30.249914  181983 cri.go:89] found id: ""
	I1211 01:09:30.249941  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.249951  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:30.249957  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:30.250021  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:30.276267  181983 cri.go:89] found id: ""
	I1211 01:09:30.276292  181983 logs.go:282] 0 containers: []
	W1211 01:09:30.276301  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:30.276311  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:30.276323  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:30.344125  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:30.344163  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:30.358590  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:30.358616  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:30.435137  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:30.435199  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:30.435228  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:30.472866  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:30.472898  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:33.010210  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:33.023732  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:33.023811  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:33.065092  181983 cri.go:89] found id: ""
	I1211 01:09:33.065111  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.065119  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:33.065125  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:33.065183  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:33.099050  181983 cri.go:89] found id: ""
	I1211 01:09:33.099079  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.099090  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:33.099096  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:33.099156  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:33.148680  181983 cri.go:89] found id: ""
	I1211 01:09:33.148710  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.148720  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:33.148725  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:33.148787  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:33.178533  181983 cri.go:89] found id: ""
	I1211 01:09:33.178558  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.178567  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:33.178573  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:33.178630  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:33.211602  181983 cri.go:89] found id: ""
	I1211 01:09:33.211630  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.211639  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:33.211645  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:33.211708  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:33.243538  181983 cri.go:89] found id: ""
	I1211 01:09:33.243565  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.243574  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:33.243580  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:33.243643  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:33.273828  181983 cri.go:89] found id: ""
	I1211 01:09:33.273851  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.273860  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:33.273866  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:33.273924  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:33.311655  181983 cri.go:89] found id: ""
	I1211 01:09:33.311676  181983 logs.go:282] 0 containers: []
	W1211 01:09:33.311686  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:33.311694  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:33.311705  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:33.390190  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:33.390259  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:33.429398  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:33.429474  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:33.556722  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:33.556744  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:33.556756  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:33.590119  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:33.590151  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:36.124553  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:36.134541  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:36.134608  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:36.164941  181983 cri.go:89] found id: ""
	I1211 01:09:36.164968  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.164978  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:36.164984  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:36.165043  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:36.210121  181983 cri.go:89] found id: ""
	I1211 01:09:36.210188  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.210213  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:36.210234  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:36.210316  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:36.242515  181983 cri.go:89] found id: ""
	I1211 01:09:36.242543  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.242552  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:36.242559  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:36.242619  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:36.273233  181983 cri.go:89] found id: ""
	I1211 01:09:36.273268  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.273277  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:36.273283  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:36.273353  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:36.309451  181983 cri.go:89] found id: ""
	I1211 01:09:36.309492  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.309501  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:36.309508  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:36.309581  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:36.341706  181983 cri.go:89] found id: ""
	I1211 01:09:36.341737  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.341755  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:36.341763  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:36.341832  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:36.385804  181983 cri.go:89] found id: ""
	I1211 01:09:36.385875  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.385898  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:36.385918  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:36.386020  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:36.422869  181983 cri.go:89] found id: ""
	I1211 01:09:36.422951  181983 logs.go:282] 0 containers: []
	W1211 01:09:36.423052  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:36.423081  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:36.423108  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:36.464792  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:36.464828  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:36.533444  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:36.533471  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:36.611525  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:36.611597  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:36.635198  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:36.635276  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:36.733391  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:39.233618  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:39.247250  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:39.247341  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:39.280807  181983 cri.go:89] found id: ""
	I1211 01:09:39.280835  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.280844  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:39.280850  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:39.280912  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:39.317410  181983 cri.go:89] found id: ""
	I1211 01:09:39.317452  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.317461  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:39.317472  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:39.317544  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:39.351466  181983 cri.go:89] found id: ""
	I1211 01:09:39.351506  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.351515  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:39.351522  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:39.351609  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:39.393869  181983 cri.go:89] found id: ""
	I1211 01:09:39.393898  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.393908  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:39.393915  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:39.393977  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:39.427109  181983 cri.go:89] found id: ""
	I1211 01:09:39.427138  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.427148  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:39.427155  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:39.427215  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:39.457165  181983 cri.go:89] found id: ""
	I1211 01:09:39.457193  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.457202  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:39.457208  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:39.457265  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:39.493568  181983 cri.go:89] found id: ""
	I1211 01:09:39.493596  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.493610  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:39.493617  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:39.493689  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:39.521037  181983 cri.go:89] found id: ""
	I1211 01:09:39.521061  181983 logs.go:282] 0 containers: []
	W1211 01:09:39.521069  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:39.521078  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:39.521089  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:39.588405  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:39.588462  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:39.604315  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:39.604391  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:39.687703  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:39.687774  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:39.687815  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:39.727654  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:39.727727  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
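The block above is minikube's per-component container scan; every query returns an empty ID list because no control-plane container was ever created. A compact shell equivalent of the same scan (a sketch, not minikube's actual code path):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done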
	I1211 01:09:42.258669  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:42.270899  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:09:42.271026  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:09:42.300813  181983 cri.go:89] found id: ""
	I1211 01:09:42.300841  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.300852  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:09:42.300859  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:09:42.300932  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:09:42.335404  181983 cri.go:89] found id: ""
	I1211 01:09:42.335429  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.335439  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:09:42.335445  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:09:42.335509  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:09:42.365418  181983 cri.go:89] found id: ""
	I1211 01:09:42.365442  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.365450  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:09:42.365456  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:09:42.365522  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:09:42.391440  181983 cri.go:89] found id: ""
	I1211 01:09:42.391462  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.391471  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:09:42.391477  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:09:42.391536  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:09:42.421424  181983 cri.go:89] found id: ""
	I1211 01:09:42.421452  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.421462  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:09:42.421468  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:09:42.421530  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:09:42.449449  181983 cri.go:89] found id: ""
	I1211 01:09:42.449476  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.449485  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:09:42.449492  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:09:42.449552  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:09:42.479092  181983 cri.go:89] found id: ""
	I1211 01:09:42.479114  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.479124  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:09:42.479130  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:09:42.479189  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:09:42.509722  181983 cri.go:89] found id: ""
	I1211 01:09:42.509747  181983 logs.go:282] 0 containers: []
	W1211 01:09:42.509755  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:09:42.509764  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:09:42.509777  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:09:42.524339  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:09:42.524365  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:09:42.595084  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:09:42.595108  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:09:42.595121  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:09:42.626008  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:09:42.626039  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1211 01:09:42.660887  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:09:42.660917  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:09:45.240502  181983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:09:45.254284  181983 kubeadm.go:602] duration metric: took 4m4.397017076s to restartPrimaryControlPlane
	W1211 01:09:45.254359  181983 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1211 01:09:45.254426  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 01:09:45.672570  181983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:09:45.685705  181983 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 01:09:45.693794  181983 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 01:09:45.693857  181983 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 01:09:45.701811  181983 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 01:09:45.701831  181983 kubeadm.go:158] found existing configuration files:
	
	I1211 01:09:45.701884  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 01:09:45.709894  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 01:09:45.709961  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 01:09:45.717911  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 01:09:45.725733  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 01:09:45.725804  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 01:09:45.733336  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 01:09:45.741065  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 01:09:45.741134  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 01:09:45.748920  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 01:09:45.756622  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 01:09:45.756693  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
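The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf file that does not reference the expected control-plane endpoint is removed before kubeadm init re-creates it. Roughly, as a shell sketch:

    ep="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done

Here every grep exits with status 2 (file missing), so all four files are treated as stale.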
	I1211 01:09:45.764754  181983 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 01:09:45.801381  181983 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 01:09:45.801447  181983 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 01:09:45.873032  181983 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 01:09:45.873107  181983 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 01:09:45.873147  181983 kubeadm.go:319] OS: Linux
	I1211 01:09:45.873202  181983 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 01:09:45.873256  181983 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 01:09:45.873311  181983 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 01:09:45.873376  181983 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 01:09:45.873427  181983 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 01:09:45.873480  181983 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 01:09:45.873529  181983 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 01:09:45.873582  181983 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 01:09:45.873632  181983 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 01:09:45.943582  181983 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 01:09:45.943696  181983 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 01:09:45.943791  181983 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 01:09:45.959351  181983 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 01:09:45.963782  181983 out.go:252]   - Generating certificates and keys ...
	I1211 01:09:45.963882  181983 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 01:09:45.963946  181983 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 01:09:45.964022  181983 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 01:09:45.964082  181983 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 01:09:45.964152  181983 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 01:09:45.964212  181983 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 01:09:45.964276  181983 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 01:09:45.964337  181983 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 01:09:45.964410  181983 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 01:09:45.964482  181983 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 01:09:45.964520  181983 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 01:09:45.964576  181983 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 01:09:46.321616  181983 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 01:09:46.859333  181983 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 01:09:47.102951  181983 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 01:09:47.314489  181983 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 01:09:47.557663  181983 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 01:09:47.558414  181983 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 01:09:47.561433  181983 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 01:09:47.565308  181983 out.go:252]   - Booting up control plane ...
	I1211 01:09:47.565414  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 01:09:47.565503  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 01:09:47.565577  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 01:09:47.581682  181983 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 01:09:47.581798  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 01:09:47.589521  181983 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 01:09:47.590115  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 01:09:47.590303  181983 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 01:09:47.726389  181983 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 01:09:47.726525  181983 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 01:13:47.727387  181983 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001005206s
	I1211 01:13:47.727428  181983 kubeadm.go:319] 
	I1211 01:13:47.727488  181983 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 01:13:47.727528  181983 kubeadm.go:319] 	- The kubelet is not running
	I1211 01:13:47.727692  181983 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 01:13:47.727714  181983 kubeadm.go:319] 
	I1211 01:13:47.727820  181983 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 01:13:47.727853  181983 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 01:13:47.727895  181983 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 01:13:47.727902  181983 kubeadm.go:319] 
	I1211 01:13:47.731469  181983 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 01:13:47.731968  181983 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 01:13:47.732121  181983 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 01:13:47.732401  181983 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 01:13:47.732413  181983 kubeadm.go:319] 
	I1211 01:13:47.732483  181983 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1211 01:13:47.732617  181983 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001005206s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
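The actual failure in this run is the kubelet never answering its local health endpoint, so kubeadm gives up after the 4m0s wait; the cgroups v1 warning above also names the kubelet gate introduced for v1.35. Manual triage on the node would follow the same steps kubeadm suggests (a sketch):

    # kubeadm's health probe, run by hand:
    curl -sSL http://127.0.0.1:10248/healthz
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50

    # On cgroup v1 hosts, the warning refers to this KubeletConfiguration field:
    #   failCgroupV1: false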
	
	I1211 01:13:47.732706  181983 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1211 01:13:48.139232  181983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:13:48.152968  181983 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 01:13:48.153040  181983 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 01:13:48.161323  181983 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 01:13:48.161342  181983 kubeadm.go:158] found existing configuration files:
	
	I1211 01:13:48.161397  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 01:13:48.169385  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 01:13:48.169455  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 01:13:48.177035  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 01:13:48.185371  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 01:13:48.185437  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 01:13:48.192892  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 01:13:48.200679  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 01:13:48.200746  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 01:13:48.208132  181983 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 01:13:48.216024  181983 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 01:13:48.216095  181983 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 01:13:48.224031  181983 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 01:13:48.378502  181983 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1211 01:13:48.378922  181983 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1211 01:13:48.467583  181983 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 01:17:49.607308  181983 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 01:17:49.607346  181983 kubeadm.go:319] 
	I1211 01:17:49.607417  181983 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 01:17:49.612027  181983 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 01:17:49.612100  181983 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 01:17:49.612195  181983 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 01:17:49.612255  181983 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 01:17:49.612295  181983 kubeadm.go:319] OS: Linux
	I1211 01:17:49.612344  181983 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 01:17:49.612396  181983 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 01:17:49.612448  181983 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 01:17:49.612500  181983 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 01:17:49.612552  181983 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 01:17:49.612606  181983 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 01:17:49.612655  181983 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 01:17:49.612707  181983 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 01:17:49.612758  181983 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 01:17:49.612833  181983 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 01:17:49.612942  181983 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 01:17:49.613038  181983 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 01:17:49.613108  181983 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 01:17:49.616510  181983 out.go:252]   - Generating certificates and keys ...
	I1211 01:17:49.616654  181983 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 01:17:49.616739  181983 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 01:17:49.616877  181983 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 01:17:49.616999  181983 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 01:17:49.617085  181983 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 01:17:49.617141  181983 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 01:17:49.617227  181983 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 01:17:49.617353  181983 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 01:17:49.617451  181983 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 01:17:49.617546  181983 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 01:17:49.617594  181983 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 01:17:49.617662  181983 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 01:17:49.617718  181983 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 01:17:49.617778  181983 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 01:17:49.617854  181983 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 01:17:49.617975  181983 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 01:17:49.618067  181983 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 01:17:49.618186  181983 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 01:17:49.618316  181983 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 01:17:49.623325  181983 out.go:252]   - Booting up control plane ...
	I1211 01:17:49.623439  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 01:17:49.623530  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 01:17:49.623618  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 01:17:49.623727  181983 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 01:17:49.623842  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 01:17:49.623979  181983 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 01:17:49.624136  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 01:17:49.624218  181983 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 01:17:49.624417  181983 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 01:17:49.624593  181983 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 01:17:49.624672  181983 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001287358s
	I1211 01:17:49.624677  181983 kubeadm.go:319] 
	I1211 01:17:49.624739  181983 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 01:17:49.624775  181983 kubeadm.go:319] 	- The kubelet is not running
	I1211 01:17:49.624886  181983 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 01:17:49.624890  181983 kubeadm.go:319] 
	I1211 01:17:49.625002  181983 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 01:17:49.625036  181983 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 01:17:49.625068  181983 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1211 01:17:49.625137  181983 kubeadm.go:403] duration metric: took 12m8.810656109s to StartCluster
	I1211 01:17:49.625183  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:17:49.625258  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:17:49.625487  181983 kubeadm.go:319] 
	I1211 01:17:49.665386  181983 cri.go:89] found id: ""
	I1211 01:17:49.665423  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.665457  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:17:49.665465  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:17:49.665550  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:17:49.694119  181983 cri.go:89] found id: ""
	I1211 01:17:49.694141  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.694149  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:17:49.694155  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:17:49.694216  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:17:49.740078  181983 cri.go:89] found id: ""
	I1211 01:17:49.740101  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.740109  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:17:49.740115  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:17:49.740173  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:17:49.768397  181983 cri.go:89] found id: ""
	I1211 01:17:49.768419  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.768427  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:17:49.768433  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:17:49.768497  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:17:49.802424  181983 cri.go:89] found id: ""
	I1211 01:17:49.802445  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.802454  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:17:49.802459  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:17:49.802518  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:17:49.829968  181983 cri.go:89] found id: ""
	I1211 01:17:49.829987  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.829995  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:17:49.830002  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:17:49.830050  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:17:49.856743  181983 cri.go:89] found id: ""
	I1211 01:17:49.856763  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.856771  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:17:49.856777  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:17:49.856828  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:17:49.883107  181983 cri.go:89] found id: ""
	I1211 01:17:49.883176  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.883198  181983 logs.go:284] No container was found matching "storage-provisioner"
	I1211 01:17:49.883220  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:17:49.883267  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:17:49.960455  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:17:49.960532  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:17:49.974950  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:17:49.975035  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:17:50.069928  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:17:50.069947  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:17:50.069959  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:17:50.108875  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:17:50.108953  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1211 01:17:50.145362  181983 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 01:17:50.145464  181983 out.go:285] * 
	W1211 01:17:50.145560  181983 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 01:17:50.145616  181983 out.go:285] * 
	W1211 01:17:50.148467  181983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 01:17:50.153595  181983 out.go:203] 
	W1211 01:17:50.156531  181983 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 01:17:50.156648  181983 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 01:17:50.156719  181983 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1211 01:17:50.159904  181983 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
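The run above exited with K8S_KUBELET_NOT_RUNNING, and minikube's own "Suggestion" line proposes retrying with the kubelet cgroup driver forced to systemd. As a hedged reproduction sketch (not verified in this run), the failing start command and the suggested flag combine as:

	# Retry sketch: same arguments as the failing invocation logged above, plus the
	# --extra-config override from minikube's "Suggestion" line; untested here.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-174503 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd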
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-174503 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-174503 version --output=json: exit status 1 (126.056679ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
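The refused connection to 192.168.76.2:8443 indicates the apiserver never came up behind the kubelet failure. A minimal probe of that endpoint, assuming the host can still route to the docker network used by this profile, would be:

	# Connectivity probe for the endpoint kubectl reported as refused; -k skips
	# TLS verification since we only care whether anything is listening at all.
	curl -sk https://192.168.76.2:8443/healthz || echo "apiserver not reachable"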
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-11 01:17:51.047219428 +0000 UTC m=+5233.217668076
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-174503
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-174503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5",
	        "Created": "2025-12-11T01:04:45.832552506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T01:05:24.13796155Z",
	            "FinishedAt": "2025-12-11T01:05:23.237236817Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5/hostname",
	        "HostsPath": "/var/lib/docker/containers/366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5/hosts",
	        "LogPath": "/var/lib/docker/containers/366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5/366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5-json.log",
	        "Name": "/kubernetes-upgrade-174503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-174503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-174503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "366c078c702248852135b002bc9128d4a1835870f654018994e010c8c1935de5",
	                "LowerDir": "/var/lib/docker/overlay2/b754786eeb16753529f55343a0f5b96ec73e708eab4d25a45a6b437efe2f0e03-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b754786eeb16753529f55343a0f5b96ec73e708eab4d25a45a6b437efe2f0e03/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b754786eeb16753529f55343a0f5b96ec73e708eab4d25a45a6b437efe2f0e03/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b754786eeb16753529f55343a0f5b96ec73e708eab4d25a45a6b437efe2f0e03/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-174503",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-174503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-174503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-174503",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-174503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f87569b513110707048754ec0c6814cb9457c7c6771bc31e9ac8b3b7fb991a22",
	            "SandboxKey": "/var/run/docker/netns/f87569b51311",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-174503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:93:ac:17:ec:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bc244d86d1bea1cd6dac037430f98b5da0eb0edd07b5bfbf6a77ffc888048a6c",
	                    "EndpointID": "4cc3cb1fb299e656dfd975d14df5b2e26534489380825952b8a0448804237887",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-174503",
	                        "366c078c7022"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
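The full inspect dump above is useful for the archive, but for quick triage the same data can be narrowed with docker's Go-template formatter. A sketch, with field paths taken from the dump above:

	# Print just the container state and the host port mapped to the apiserver
	# (8443/tcp) instead of the full JSON document.
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' \
	  kubernetes-upgrade-174503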
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-174503 -n kubernetes-upgrade-174503
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-174503 -n kubernetes-upgrade-174503: exit status 2 (443.012237ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
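Since the container still reports Running, the kubeadm advice quoted earlier ('systemctl status kubelet', 'journalctl -xeu kubelet') can be followed up inside the node before the harness collects its own logs. A sketch, assuming 'minikube ssh' still works against this profile:

	# Run the two troubleshooting commands from the kubeadm output inside the node;
	# the trailing tail runs on the host and trims journalctl to the recent entries.
	out/minikube-linux-arm64 -p kubernetes-upgrade-174503 ssh -- sudo systemctl status kubelet --no-pager
	out/minikube-linux-arm64 -p kubernetes-upgrade-174503 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50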
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-174503 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ stop    │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p missing-upgrade-724666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ stop    │ -p kubernetes-upgrade-174503                                                                                                                    │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ delete  │ -p missing-upgrade-724666                                                                                                                       │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │                     │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:06 UTC │
	│ stop    │ stopped-upgrade-421398 stop                                                                                                                     │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:06 UTC │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:10 UTC │
	│ delete  │ -p stopped-upgrade-421398                                                                                                                       │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:10 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-335241    │ jenkins │ v1.35.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:11 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:11 UTC │ 11 Dec 25 01:15 UTC │
	│ delete  │ -p running-upgrade-335241                                                                                                                       │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:15 UTC │
	│ start   │ -p pause-906108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:16 UTC │
	│ start   │ -p pause-906108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:16 UTC │ 11 Dec 25 01:17 UTC │
	│ pause   │ -p pause-906108 --alsologtostderr -v=5                                                                                                          │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:17 UTC │                     │
	│ delete  │ -p pause-906108                                                                                                                                 │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:17 UTC │ 11 Dec 25 01:17 UTC │
	│ start   │ -p force-systemd-flag-097163 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-097163 │ jenkins │ v1.37.0 │ 11 Dec 25 01:17 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 01:17:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 01:17:32.297581  218469 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:17:32.297693  218469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:17:32.297704  218469 out.go:374] Setting ErrFile to fd 2...
	I1211 01:17:32.297709  218469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:17:32.297974  218469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:17:32.298380  218469 out.go:368] Setting JSON to false
	I1211 01:17:32.299307  218469 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5339,"bootTime":1765410514,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 01:17:32.299388  218469 start.go:143] virtualization:  
	I1211 01:17:32.302331  218469 out.go:179] * [force-systemd-flag-097163] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 01:17:32.305865  218469 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 01:17:32.305984  218469 notify.go:221] Checking for updates...
	I1211 01:17:32.311500  218469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 01:17:32.314262  218469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:17:32.317196  218469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 01:17:32.319953  218469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 01:17:32.322904  218469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 01:17:32.326185  218469 config.go:182] Loaded profile config "kubernetes-upgrade-174503": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 01:17:32.326330  218469 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 01:17:32.353096  218469 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 01:17:32.353222  218469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:17:32.414538  218469 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 01:17:32.404915154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:17:32.414639  218469 docker.go:319] overlay module found
	I1211 01:17:32.417747  218469 out.go:179] * Using the docker driver based on user configuration
	I1211 01:17:32.420473  218469 start.go:309] selected driver: docker
	I1211 01:17:32.420489  218469 start.go:927] validating driver "docker" against <nil>
	I1211 01:17:32.420503  218469 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 01:17:32.421210  218469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:17:32.473824  218469 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 01:17:32.464002912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:17:32.474019  218469 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1211 01:17:32.474301  218469 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 01:17:32.477355  218469 out.go:179] * Using Docker driver with root privileges
	I1211 01:17:32.480228  218469 cni.go:84] Creating CNI manager for ""
	I1211 01:17:32.480324  218469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:17:32.480337  218469 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 01:17:32.480418  218469 start.go:353] cluster config:
	{Name:force-systemd-flag-097163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-097163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:17:32.485511  218469 out.go:179] * Starting "force-systemd-flag-097163" primary control-plane node in "force-systemd-flag-097163" cluster
	I1211 01:17:32.488472  218469 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 01:17:32.491464  218469 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 01:17:32.494448  218469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:17:32.494555  218469 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1211 01:17:32.494567  218469 cache.go:65] Caching tarball of preloaded images
	I1211 01:17:32.494495  218469 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 01:17:32.494657  218469 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 01:17:32.494667  218469 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 01:17:32.494768  218469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/config.json ...
	I1211 01:17:32.494784  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/config.json: {Name:mk427367a3db85c5f88e0237c9427fd99080d39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:32.515491  218469 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 01:17:32.515516  218469 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 01:17:32.515538  218469 cache.go:243] Successfully downloaded all kic artifacts
	I1211 01:17:32.515567  218469 start.go:360] acquireMachinesLock for force-systemd-flag-097163: {Name:mke52c5400cecf73ce77296944ff89d6103e9e43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 01:17:32.515692  218469 start.go:364] duration metric: took 96.041µs to acquireMachinesLock for "force-systemd-flag-097163"
	I1211 01:17:32.515722  218469 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-097163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-097163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 01:17:32.515793  218469 start.go:125] createHost starting for "" (driver="docker")
	I1211 01:17:32.519220  218469 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1211 01:17:32.519464  218469 start.go:159] libmachine.API.Create for "force-systemd-flag-097163" (driver="docker")
	I1211 01:17:32.519504  218469 client.go:173] LocalClient.Create starting
	I1211 01:17:32.519567  218469 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem
	I1211 01:17:32.519606  218469 main.go:143] libmachine: Decoding PEM data...
	I1211 01:17:32.519626  218469 main.go:143] libmachine: Parsing certificate...
	I1211 01:17:32.519675  218469 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem
	I1211 01:17:32.519704  218469 main.go:143] libmachine: Decoding PEM data...
	I1211 01:17:32.519716  218469 main.go:143] libmachine: Parsing certificate...
	I1211 01:17:32.520100  218469 cli_runner.go:164] Run: docker network inspect force-systemd-flag-097163 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1211 01:17:32.535987  218469 cli_runner.go:211] docker network inspect force-systemd-flag-097163 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1211 01:17:32.536079  218469 network_create.go:284] running [docker network inspect force-systemd-flag-097163] to gather additional debugging logs...
	I1211 01:17:32.536105  218469 cli_runner.go:164] Run: docker network inspect force-systemd-flag-097163
	W1211 01:17:32.551922  218469 cli_runner.go:211] docker network inspect force-systemd-flag-097163 returned with exit code 1
	I1211 01:17:32.551954  218469 network_create.go:287] error running [docker network inspect force-systemd-flag-097163]: docker network inspect force-systemd-flag-097163: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-097163 not found
	I1211 01:17:32.551968  218469 network_create.go:289] output of [docker network inspect force-systemd-flag-097163]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-097163 not found
	
	** /stderr **
	I1211 01:17:32.552078  218469 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 01:17:32.568647  218469 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7dc124717d46 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:5d:ee:ae:c9:fd} reservation:<nil>}
	I1211 01:17:32.568954  218469 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-18b0dcd3ac8e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:3e:98:1f:6c:f4} reservation:<nil>}
	I1211 01:17:32.569228  218469 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4c40cafc121e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:fd:99:1c:e5:98} reservation:<nil>}
	I1211 01:17:32.569556  218469 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bc244d86d1be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:a5:c7:9e:7e:e1} reservation:<nil>}
	I1211 01:17:32.569921  218469 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019eab80}
	I1211 01:17:32.569946  218469 network_create.go:124] attempt to create docker network force-systemd-flag-097163 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1211 01:17:32.570000  218469 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-097163 force-systemd-flag-097163
	I1211 01:17:32.628950  218469 network_create.go:108] docker network force-systemd-flag-097163 192.168.85.0/24 created
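Editor's note on the subnet scan above: network_create walks the private 192.168.x.0/24 candidates (49, 58, 67, 76, ...) and takes the first one with no existing bridge, landing here on 192.168.85.0/24. A minimal stdlib-only Go sketch of that first-free scan, assuming the 9-wide stride seen in the log; `firstFreeSubnet` is an illustrative helper, not minikube's actual API:

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet is a hypothetical helper: it steps through 192.168.x.0/24
// candidates (x = 49, 58, 67, ... as in the log) and returns the first subnet
// whose gateway IP is not already bound to a local interface.
func firstFreeSubnet() (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for octet := 49; octet < 255; octet += 9 {
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if !taken[gateway] {
			return fmt.Sprintf("192.168.%d.0/24", octet), nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // e.g. 192.168.85.0/24
}
```

minikube's real implementation also tracks reservations, interface MTU, and MAC addresses (visible in the skipped-subnet log lines); this sketch only checks locally bound gateway IPs.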
	I1211 01:17:32.628984  218469 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-097163" container
	I1211 01:17:32.629064  218469 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1211 01:17:32.645718  218469 cli_runner.go:164] Run: docker volume create force-systemd-flag-097163 --label name.minikube.sigs.k8s.io=force-systemd-flag-097163 --label created_by.minikube.sigs.k8s.io=true
	I1211 01:17:32.669351  218469 oci.go:103] Successfully created a docker volume force-systemd-flag-097163
	I1211 01:17:32.669442  218469 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-097163-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-097163 --entrypoint /usr/bin/test -v force-systemd-flag-097163:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1211 01:17:33.182045  218469 oci.go:107] Successfully prepared a docker volume force-systemd-flag-097163
	I1211 01:17:33.182131  218469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:17:33.182148  218469 kic.go:194] Starting extracting preloaded images to volume ...
	I1211 01:17:33.182230  218469 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-097163:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1211 01:17:37.244856  218469 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-097163:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.062584359s)
	I1211 01:17:37.244891  218469 kic.go:203] duration metric: took 4.06273947s to extract preloaded images to volume ...
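The extraction step above runs `tar` inside a throwaway kicbase container so the preloaded images land directly in the named volume, then reports the elapsed time as a "duration metric". A hedged os/exec sketch of the same pattern; the tarball path, volume name, and image tag below are placeholders, not the exact values minikube computes:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Run /usr/bin/tar inside a disposable container, with the preload
	// tarball bind-mounted read-only and the docker volume mounted at
	// /extractDir -- the same shape as the `docker run --rm --entrypoint
	// /usr/bin/tar ...` call in the log. All names here are placeholders.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
		"-v", "my-volume:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	// minikube logs this as "duration metric: took ... to extract ..."
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
}
```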
	W1211 01:17:37.245044  218469 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1211 01:17:37.245152  218469 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1211 01:17:37.302876  218469 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-097163 --name force-systemd-flag-097163 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-097163 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-097163 --network force-systemd-flag-097163 --ip 192.168.85.2 --volume force-systemd-flag-097163:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1211 01:17:37.616418  218469 cli_runner.go:164] Run: docker container inspect force-systemd-flag-097163 --format={{.State.Running}}
	I1211 01:17:37.639875  218469 cli_runner.go:164] Run: docker container inspect force-systemd-flag-097163 --format={{.State.Status}}
	I1211 01:17:37.661005  218469 cli_runner.go:164] Run: docker exec force-systemd-flag-097163 stat /var/lib/dpkg/alternatives/iptables
	I1211 01:17:37.712208  218469 oci.go:144] the created container "force-systemd-flag-097163" has a running status.
	I1211 01:17:37.712235  218469 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa...
	I1211 01:17:38.348065  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1211 01:17:38.348124  218469 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1211 01:17:38.375338  218469 cli_runner.go:164] Run: docker container inspect force-systemd-flag-097163 --format={{.State.Status}}
	I1211 01:17:38.406708  218469 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1211 01:17:38.406727  218469 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-097163 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1211 01:17:38.469962  218469 cli_runner.go:164] Run: docker container inspect force-systemd-flag-097163 --format={{.State.Status}}
	I1211 01:17:38.499684  218469 machine.go:94] provisionDockerMachine start ...
	I1211 01:17:38.499786  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:38.519215  218469 main.go:143] libmachine: Using SSH client type: native
	I1211 01:17:38.519566  218469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1211 01:17:38.519575  218469 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 01:17:38.670388  218469 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-097163
	
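All provisioning SSH traffic goes to 127.0.0.1:33035 rather than the container IP: the host port is resolved from Docker's published-port table with the inspect template shown above. A small sketch of that lookup (container name hard-coded for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor22 runs the same `docker container inspect -f ...` template the
// log uses to find where the container's 22/tcp is published on the host.
func hostPortFor22(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortFor22("force-systemd-flag-097163")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh docker@127.0.0.1 -p", port) // the log connects to 127.0.0.1:33035
}
```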
	I1211 01:17:38.670414  218469 ubuntu.go:182] provisioning hostname "force-systemd-flag-097163"
	I1211 01:17:38.670493  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:38.688899  218469 main.go:143] libmachine: Using SSH client type: native
	I1211 01:17:38.689193  218469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1211 01:17:38.689204  218469 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-097163 && echo "force-systemd-flag-097163" | sudo tee /etc/hostname
	I1211 01:17:38.857257  218469 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-097163
	
	I1211 01:17:38.857358  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:38.876650  218469 main.go:143] libmachine: Using SSH client type: native
	I1211 01:17:38.876962  218469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1211 01:17:38.876986  218469 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-097163' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-097163/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-097163' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 01:17:39.039534  218469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
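The three-branch shell snippet just executed is an idempotent /etc/hosts fix-up: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 entry if present, otherwise append one. The same logic in plain Go, as a sketch; `ensureHostname` is a hypothetical helper mirroring the script, not minikube code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname mirrors the shell logic from the log: no-op if the hostname
// is already mapped, rewrite an existing 127.0.1.1 line, else append one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, "\t"+hostname) || strings.HasSuffix(l, " "+hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "force-systemd-flag-097163"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```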
	I1211 01:17:39.039561  218469 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 01:17:39.039587  218469 ubuntu.go:190] setting up certificates
	I1211 01:17:39.039597  218469 provision.go:84] configureAuth start
	I1211 01:17:39.039667  218469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-097163
	I1211 01:17:39.057259  218469 provision.go:143] copyHostCerts
	I1211 01:17:39.057341  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 01:17:39.057375  218469 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 01:17:39.057389  218469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 01:17:39.057468  218469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 01:17:39.057564  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 01:17:39.057590  218469 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 01:17:39.057598  218469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 01:17:39.057636  218469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 01:17:39.057693  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 01:17:39.057714  218469 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 01:17:39.057722  218469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 01:17:39.057751  218469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 01:17:39.057821  218469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-097163 san=[127.0.0.1 192.168.85.2 force-systemd-flag-097163 localhost minikube]
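configureAuth issues a server certificate whose SANs cover every name the machine may be reached by, both IPs (127.0.0.1, 192.168.85.2) and DNS names (the profile name, localhost, minikube). A compact crypto/x509 sketch of issuing a SAN-bearing server cert from a freshly minted in-memory CA; minikube instead loads its CA from ca.pem/ca-key.pem on disk, so treat this as illustrative only:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway in-memory CA (minikube loads an existing one from disk).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // cert expiration from the log
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert: SANs carry both IPs and DNS names, as in the log's
	// san=[127.0.0.1 192.168.85.2 force-systemd-flag-097163 localhost minikube].
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-097163"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"force-systemd-flag-097163", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
```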
	I1211 01:17:39.310717  218469 provision.go:177] copyRemoteCerts
	I1211 01:17:39.310782  218469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 01:17:39.310827  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:39.327249  218469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa Username:docker}
	I1211 01:17:39.430760  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 01:17:39.430825  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 01:17:39.449625  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 01:17:39.449725  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1211 01:17:39.468093  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 01:17:39.468196  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1211 01:17:39.485435  218469 provision.go:87] duration metric: took 445.809728ms to configureAuth
	I1211 01:17:39.485464  218469 ubuntu.go:206] setting minikube options for container-runtime
	I1211 01:17:39.485679  218469 config.go:182] Loaded profile config "force-systemd-flag-097163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:17:39.485788  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:39.503543  218469 main.go:143] libmachine: Using SSH client type: native
	I1211 01:17:39.503872  218469 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1211 01:17:39.503901  218469 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 01:17:39.824891  218469 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 01:17:39.824912  218469 machine.go:97] duration metric: took 1.325207863s to provisionDockerMachine
	I1211 01:17:39.824922  218469 client.go:176] duration metric: took 7.305411832s to LocalClient.Create
	I1211 01:17:39.824936  218469 start.go:167] duration metric: took 7.305474208s to libmachine.API.Create "force-systemd-flag-097163"
	I1211 01:17:39.824943  218469 start.go:293] postStartSetup for "force-systemd-flag-097163" (driver="docker")
	I1211 01:17:39.824952  218469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 01:17:39.825016  218469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 01:17:39.825067  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:39.842711  218469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa Username:docker}
	I1211 01:17:39.947044  218469 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 01:17:39.950269  218469 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 01:17:39.950308  218469 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 01:17:39.950319  218469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 01:17:39.950374  218469 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 01:17:39.950457  218469 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 01:17:39.950468  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /etc/ssl/certs/48752.pem
	I1211 01:17:39.950575  218469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 01:17:39.957913  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:39.974766  218469 start.go:296] duration metric: took 149.808976ms for postStartSetup
	I1211 01:17:39.975158  218469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-097163
	I1211 01:17:39.993025  218469 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/config.json ...
	I1211 01:17:39.993323  218469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 01:17:39.993375  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:40.023428  218469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa Username:docker}
	I1211 01:17:40.128056  218469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 01:17:40.132870  218469 start.go:128] duration metric: took 7.617062356s to createHost
	I1211 01:17:40.132897  218469 start.go:83] releasing machines lock for "force-systemd-flag-097163", held for 7.617192375s
	I1211 01:17:40.132976  218469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-097163
	I1211 01:17:40.155822  218469 ssh_runner.go:195] Run: cat /version.json
	I1211 01:17:40.155880  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:40.156180  218469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 01:17:40.156230  218469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-097163
	I1211 01:17:40.176852  218469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa Username:docker}
	I1211 01:17:40.181037  218469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/force-systemd-flag-097163/id_rsa Username:docker}
	I1211 01:17:40.366530  218469 ssh_runner.go:195] Run: systemctl --version
	I1211 01:17:40.373070  218469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 01:17:40.417353  218469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 01:17:40.422231  218469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 01:17:40.422303  218469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 01:17:40.455019  218469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1211 01:17:40.455045  218469 start.go:496] detecting cgroup driver to use...
	I1211 01:17:40.455058  218469 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1211 01:17:40.455114  218469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 01:17:40.478903  218469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 01:17:40.492352  218469 docker.go:218] disabling cri-docker service (if available) ...
	I1211 01:17:40.492439  218469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 01:17:40.511380  218469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 01:17:40.530762  218469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 01:17:40.650773  218469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 01:17:40.782929  218469 docker.go:234] disabling docker service ...
	I1211 01:17:40.783030  218469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 01:17:40.805984  218469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 01:17:40.819572  218469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 01:17:40.939750  218469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 01:17:41.059943  218469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 01:17:41.072540  218469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 01:17:41.086004  218469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 01:17:41.086072  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.095108  218469 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1211 01:17:41.095209  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.105026  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.113847  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.122658  218469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 01:17:41.131094  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.139813  218469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.153959  218469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:41.163399  218469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 01:17:41.172684  218469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 01:17:41.180044  218469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:41.309737  218469 ssh_runner.go:195] Run: sudo systemctl restart crio
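Steps 01:17:41.086 through 41.163 retarget CRI-O with in-place `sed` edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force cgroup_manager = "systemd" (the driver this test enforces via flags), keep conmon in the pod cgroup, and open unprivileged ports through default_sysctls, before the daemon-reload and restart above. A local Go sketch of the same whole-line rewrites using regexp instead of sed; only two of the substitutions are shown, with error handling trimmed:

```go
package main

import (
	"os"
	"regexp"
)

// rewriteConf applies the same kind of whole-line substitutions the log
// performs with `sed -i` on /etc/crio/crio.conf.d/02-crio.conf.
func rewriteConf(path string, subs map[string]string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for pattern, repl := range subs {
		data = regexp.MustCompile(pattern).ReplaceAll(data, []byte(repl))
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	_ = rewriteConf("/etc/crio/crio.conf.d/02-crio.conf", map[string]string{
		`(?m)^.*pause_image = .*$`:    `pause_image = "registry.k8s.io/pause:3.10.1"`,
		`(?m)^.*cgroup_manager = .*$`: `cgroup_manager = "systemd"`,
	})
	// minikube then runs `systemctl daemon-reload` and `systemctl restart crio`.
}
```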
	I1211 01:17:41.495896  218469 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 01:17:41.495976  218469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 01:17:41.499673  218469 start.go:564] Will wait 60s for crictl version
	I1211 01:17:41.499741  218469 ssh_runner.go:195] Run: which crictl
	I1211 01:17:41.503050  218469 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 01:17:41.530447  218469 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 01:17:41.530530  218469 ssh_runner.go:195] Run: crio --version
	I1211 01:17:41.557815  218469 ssh_runner.go:195] Run: crio --version
	I1211 01:17:41.590951  218469 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1211 01:17:41.593836  218469 cli_runner.go:164] Run: docker network inspect force-systemd-flag-097163 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 01:17:41.610665  218469 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1211 01:17:41.614753  218469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 01:17:41.624957  218469 kubeadm.go:884] updating cluster {Name:force-systemd-flag-097163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-097163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 01:17:41.625096  218469 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:17:41.625158  218469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:41.661449  218469 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:41.661474  218469 crio.go:433] Images already preloaded, skipping extraction
	I1211 01:17:41.661530  218469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:41.687422  218469 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:41.687444  218469 cache_images.go:86] Images are preloaded, skipping loading
	I1211 01:17:41.687452  218469 kubeadm.go:935] updating node { 192.168.85.2  8443 v1.34.2 crio true true} ...
	I1211 01:17:41.687541  218469 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-097163 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-097163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 01:17:41.687621  218469 ssh_runner.go:195] Run: crio config
	I1211 01:17:41.758662  218469 cni.go:84] Creating CNI manager for ""
	I1211 01:17:41.758689  218469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:17:41.758732  218469 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 01:17:41.758760  218469 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-097163 NodeName:force-systemd-flag-097163 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 01:17:41.758943  218469 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-097163"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
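The generated kubeadm config above stacks four documents in one file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. To pull a single document out for inspection, a plain string split is enough, since kubeadm uses bare `---` separator lines; a stdlib-only sketch (no YAML parsing, just document splitting):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	// kubeadm separates documents with a line containing only "---".
	docs := strings.Split(string(data), "\n---\n")
	for i, d := range docs {
		for _, line := range strings.Split(d, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```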
	I1211 01:17:41.759035  218469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 01:17:41.766602  218469 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 01:17:41.766667  218469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 01:17:41.773772  218469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1211 01:17:41.786069  218469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 01:17:41.798402  218469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1211 01:17:41.810850  218469 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1211 01:17:41.814420  218469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 01:17:41.824047  218469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:41.952416  218469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 01:17:41.971823  218469 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163 for IP: 192.168.85.2
	I1211 01:17:41.971841  218469 certs.go:195] generating shared ca certs ...
	I1211 01:17:41.971857  218469 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:41.971989  218469 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 01:17:41.972076  218469 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 01:17:41.972084  218469 certs.go:257] generating profile certs ...
	I1211 01:17:41.972136  218469 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.key
	I1211 01:17:41.972147  218469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.crt with IP's: []
	I1211 01:17:42.310888  218469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.crt ...
	I1211 01:17:42.310929  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.crt: {Name:mkeb560e220f64223e9b1f2ba8437e76357e7708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.311164  218469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.key ...
	I1211 01:17:42.311185  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/client.key: {Name:mke9309ed4b53d9f911d766119aceca30a4940b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.311289  218469 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key.f39fea81
	I1211 01:17:42.311310  218469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt.f39fea81 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1211 01:17:42.410176  218469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt.f39fea81 ...
	I1211 01:17:42.410204  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt.f39fea81: {Name:mk1af2f3163d4ad03296036ae81007540cb8b06a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.410381  218469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key.f39fea81 ...
	I1211 01:17:42.410398  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key.f39fea81: {Name:mkeb99bd27024e05958cc45f21b487e2b62685b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.410521  218469 certs.go:382] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt.f39fea81 -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt
	I1211 01:17:42.410622  218469 certs.go:386] copying /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key.f39fea81 -> /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key
	I1211 01:17:42.410683  218469 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.key
	I1211 01:17:42.410702  218469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.crt with IP's: []
	I1211 01:17:42.554459  218469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.crt ...
	I1211 01:17:42.554491  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.crt: {Name:mkbaec5e1c4aeea285741feb908ed39f9a3882a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.554679  218469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.key ...
	I1211 01:17:42.554696  218469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.key: {Name:mk13b88c7ad0e0b8add912ab2350baf8ab496ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:42.554778  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 01:17:42.554804  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 01:17:42.554817  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 01:17:42.554832  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 01:17:42.554845  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 01:17:42.554860  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 01:17:42.554880  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 01:17:42.554897  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 01:17:42.554953  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 01:17:42.555015  218469 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 01:17:42.555031  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 01:17:42.555062  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 01:17:42.555096  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 01:17:42.555126  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 01:17:42.555177  218469 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:42.555221  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem -> /usr/share/ca-certificates/4875.pem
	I1211 01:17:42.555240  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> /usr/share/ca-certificates/48752.pem
	I1211 01:17:42.555255  218469 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:42.555793  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 01:17:42.573561  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 01:17:42.592989  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 01:17:42.611080  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 01:17:42.629158  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1211 01:17:42.648037  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 01:17:42.722830  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 01:17:42.740654  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/force-systemd-flag-097163/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 01:17:42.759644  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 01:17:42.778100  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 01:17:42.796739  218469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 01:17:42.814954  218469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 01:17:42.828059  218469 ssh_runner.go:195] Run: openssl version
	I1211 01:17:42.834513  218469 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 01:17:42.842771  218469 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 01:17:42.850451  218469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 01:17:42.854416  218469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 01:17:42.854479  218469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 01:17:42.896837  218469 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 01:17:42.904660  218469 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4875.pem /etc/ssl/certs/51391683.0
	I1211 01:17:42.912209  218469 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 01:17:42.919767  218469 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 01:17:42.927241  218469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 01:17:42.931024  218469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 01:17:42.931135  218469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 01:17:42.972698  218469 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 01:17:42.980428  218469 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/48752.pem /etc/ssl/certs/3ec20f2e.0
	I1211 01:17:42.987748  218469 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:42.995325  218469 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 01:17:43.002735  218469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:43.007735  218469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:43.007872  218469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:43.050384  218469 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 01:17:43.058259  218469 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
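The openssl/ln pairs above install each CA under /etc/ssl/certs with the `<subject-hash>.0` symlink that OpenSSL's certificate lookup requires (51391683.0, 3ec20f2e.0, b5213941.0). A sketch that shells out to `openssl x509 -hash` and creates the link, mirroring the log's `ln -fs`; `linkBySubjectHash` is an illustrative helper:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// symlinks it as <hash>.0 inside certsDir, like the log's `ln -fs`.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```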
	I1211 01:17:43.065992  218469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 01:17:43.069841  218469 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 01:17:43.069924  218469 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-097163 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-097163 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:17:43.070017  218469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 01:17:43.070128  218469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 01:17:43.098486  218469 cri.go:89] found id: ""
	I1211 01:17:43.098612  218469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 01:17:43.106999  218469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 01:17:43.114862  218469 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1211 01:17:43.115033  218469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 01:17:43.123138  218469 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 01:17:43.123162  218469 kubeadm.go:158] found existing configuration files:
	
	I1211 01:17:43.123256  218469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 01:17:43.131098  218469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 01:17:43.131197  218469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 01:17:43.139098  218469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 01:17:43.146999  218469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 01:17:43.147117  218469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 01:17:43.154617  218469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 01:17:43.162351  218469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 01:17:43.162466  218469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 01:17:43.169975  218469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 01:17:43.177630  218469 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 01:17:43.177705  218469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
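	The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise (here every grep exits 2 because the files do not exist yet, so the rm -f calls are no-ops). A minimal shell sketch of the same pattern, with the endpoint and file names taken from the log; an illustration of the check, not minikube's actual source:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already references the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done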
	I1211 01:17:43.185755  218469 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1211 01:17:43.228201  218469 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1211 01:17:43.228491  218469 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 01:17:43.254318  218469 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 01:17:43.254442  218469 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 01:17:43.254503  218469 kubeadm.go:319] OS: Linux
	I1211 01:17:43.254591  218469 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 01:17:43.254679  218469 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 01:17:43.254757  218469 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 01:17:43.254834  218469 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 01:17:43.254916  218469 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 01:17:43.255030  218469 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 01:17:43.255112  218469 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 01:17:43.255188  218469 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 01:17:43.255270  218469 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 01:17:43.327329  218469 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 01:17:43.327499  218469 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 01:17:43.327628  218469 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
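	As the preflight message above notes, the image pull can be done ahead of time. A hedged one-liner, assuming the same binary path and kubeadm config file shown in the init command above:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
      kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml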
	I1211 01:17:43.334343  218469 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 01:17:43.340389  218469 out.go:252]   - Generating certificates and keys ...
	I1211 01:17:43.340550  218469 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 01:17:43.340650  218469 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 01:17:43.683479  218469 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 01:17:44.404942  218469 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1211 01:17:45.077744  218469 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1211 01:17:45.600573  218469 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1211 01:17:46.030747  218469 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1211 01:17:46.031143  218469 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-097163 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1211 01:17:46.369205  218469 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1211 01:17:46.369386  218469 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-097163 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1211 01:17:46.900870  218469 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 01:17:49.607308  181983 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1211 01:17:49.607346  181983 kubeadm.go:319] 
	I1211 01:17:49.607417  181983 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1211 01:17:49.612027  181983 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1211 01:17:49.612100  181983 kubeadm.go:319] [preflight] Running pre-flight checks
	I1211 01:17:49.612195  181983 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1211 01:17:49.612255  181983 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1211 01:17:49.612295  181983 kubeadm.go:319] OS: Linux
	I1211 01:17:49.612344  181983 kubeadm.go:319] CGROUPS_CPU: enabled
	I1211 01:17:49.612396  181983 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1211 01:17:49.612448  181983 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1211 01:17:49.612500  181983 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1211 01:17:49.612552  181983 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1211 01:17:49.612606  181983 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1211 01:17:49.612655  181983 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1211 01:17:49.612707  181983 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1211 01:17:49.612758  181983 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1211 01:17:49.612833  181983 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 01:17:49.612942  181983 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 01:17:49.613038  181983 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 01:17:49.613108  181983 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 01:17:49.616510  181983 out.go:252]   - Generating certificates and keys ...
	I1211 01:17:49.616654  181983 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1211 01:17:49.616739  181983 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1211 01:17:49.616877  181983 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1211 01:17:49.616999  181983 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1211 01:17:49.617085  181983 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1211 01:17:49.617141  181983 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1211 01:17:49.617227  181983 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1211 01:17:49.617353  181983 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1211 01:17:49.617451  181983 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1211 01:17:49.617546  181983 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1211 01:17:49.617594  181983 kubeadm.go:319] [certs] Using the existing "sa" key
	I1211 01:17:49.617662  181983 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 01:17:49.617718  181983 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 01:17:49.617778  181983 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 01:17:49.617854  181983 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 01:17:49.617975  181983 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 01:17:49.618067  181983 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 01:17:49.618186  181983 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 01:17:49.618316  181983 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 01:17:49.623325  181983 out.go:252]   - Booting up control plane ...
	I1211 01:17:49.623439  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 01:17:49.623530  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 01:17:49.623618  181983 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 01:17:49.623727  181983 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 01:17:49.623842  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1211 01:17:49.623979  181983 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1211 01:17:49.624136  181983 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 01:17:49.624218  181983 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1211 01:17:49.624417  181983 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 01:17:49.624593  181983 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 01:17:49.624672  181983 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001287358s
	I1211 01:17:49.624677  181983 kubeadm.go:319] 
	I1211 01:17:49.624739  181983 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1211 01:17:49.624775  181983 kubeadm.go:319] 	- The kubelet is not running
	I1211 01:17:49.624886  181983 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1211 01:17:49.624890  181983 kubeadm.go:319] 
	I1211 01:17:49.625002  181983 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1211 01:17:49.625036  181983 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1211 01:17:49.625068  181983 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
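	kubeadm's advice above maps to a short triage sequence on the node itself; a sketch using only commands already quoted in this log (run them on the node, e.g. via 'minikube ssh' with the affected profile):

    systemctl status kubelet                   # is the unit up, and since when?
    journalctl -xeu kubelet | tail -n 50       # the exit reason (the kubelet journal later in this report shows it)
    curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm's wait loop polls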
	I1211 01:17:49.625137  181983 kubeadm.go:403] duration metric: took 12m8.810656109s to StartCluster
	I1211 01:17:49.625183  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1211 01:17:49.625258  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1211 01:17:49.625487  181983 kubeadm.go:319] 
	I1211 01:17:49.665386  181983 cri.go:89] found id: ""
	I1211 01:17:49.665423  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.665457  181983 logs.go:284] No container was found matching "kube-apiserver"
	I1211 01:17:49.665465  181983 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1211 01:17:49.665550  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1211 01:17:49.694119  181983 cri.go:89] found id: ""
	I1211 01:17:49.694141  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.694149  181983 logs.go:284] No container was found matching "etcd"
	I1211 01:17:49.694155  181983 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1211 01:17:49.694216  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1211 01:17:49.740078  181983 cri.go:89] found id: ""
	I1211 01:17:49.740101  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.740109  181983 logs.go:284] No container was found matching "coredns"
	I1211 01:17:49.740115  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1211 01:17:49.740173  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1211 01:17:49.768397  181983 cri.go:89] found id: ""
	I1211 01:17:49.768419  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.768427  181983 logs.go:284] No container was found matching "kube-scheduler"
	I1211 01:17:49.768433  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1211 01:17:49.768497  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1211 01:17:49.802424  181983 cri.go:89] found id: ""
	I1211 01:17:49.802445  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.802454  181983 logs.go:284] No container was found matching "kube-proxy"
	I1211 01:17:49.802459  181983 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1211 01:17:49.802518  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1211 01:17:49.829968  181983 cri.go:89] found id: ""
	I1211 01:17:49.829987  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.829995  181983 logs.go:284] No container was found matching "kube-controller-manager"
	I1211 01:17:49.830002  181983 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1211 01:17:49.830050  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1211 01:17:49.856743  181983 cri.go:89] found id: ""
	I1211 01:17:49.856763  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.856771  181983 logs.go:284] No container was found matching "kindnet"
	I1211 01:17:49.856777  181983 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1211 01:17:49.856828  181983 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1211 01:17:49.883107  181983 cri.go:89] found id: ""
	I1211 01:17:49.883176  181983 logs.go:282] 0 containers: []
	W1211 01:17:49.883198  181983 logs.go:284] No container was found matching "storage-provisioner"
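	The eight crictl probes above (kube-apiserver through storage-provisioner) all come back empty because the kubelet never launched any pods. A sketch of the same per-component check as a single loop, assuming crictl is on the node as in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] || echo "no container matching $name"
    done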
	I1211 01:17:49.883220  181983 logs.go:123] Gathering logs for kubelet ...
	I1211 01:17:49.883267  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1211 01:17:49.960455  181983 logs.go:123] Gathering logs for dmesg ...
	I1211 01:17:49.960532  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1211 01:17:49.974950  181983 logs.go:123] Gathering logs for describe nodes ...
	I1211 01:17:49.975035  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1211 01:17:50.069928  181983 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1211 01:17:50.069947  181983 logs.go:123] Gathering logs for CRI-O ...
	I1211 01:17:50.069959  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1211 01:17:50.108875  181983 logs.go:123] Gathering logs for container status ...
	I1211 01:17:50.108953  181983 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1211 01:17:50.145362  181983 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1211 01:17:50.145464  181983 out.go:285] * 
	W1211 01:17:50.145560  181983 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 01:17:50.145616  181983 out.go:285] * 
	W1211 01:17:50.148467  181983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
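	For reference, the log-collection command named in the box, with the profile for this run filled in (profile name taken from the node logs below; adjust for other runs):

    minikube logs --file=logs.txt -p kubernetes-upgrade-174503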
	I1211 01:17:50.153595  181983 out.go:203] 
	W1211 01:17:50.156531  181983 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001287358s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1211 01:17:50.156648  181983 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1211 01:17:50.156719  181983 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
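	A sketch of a retry with that suggestion applied, using the profile, driver, and runtime this job runs with (the flag values are assumptions inferred from the report, not a verified fix):

    minikube start -p kubernetes-upgrade-174503 --driver=docker \
      --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd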
	I1211 01:17:50.159904  181983 out.go:203] 
	I1211 01:17:47.569007  218469 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 01:17:48.311974  218469 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1211 01:17:48.312338  218469 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 01:17:48.514879  218469 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 01:17:48.902242  218469 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 01:17:49.044367  218469 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 01:17:49.480317  218469 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 01:17:51.319984  218469 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 01:17:51.320086  218469 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 01:17:51.320330  218469 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224088678Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224249571Z" level=info msg="Starting seccomp notifier watcher"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224460285Z" level=info msg="Create NRI interface"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224687934Z" level=info msg="built-in NRI default validator is disabled"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224778994Z" level=info msg="runtime interface created"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224838604Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.22490422Z" level=info msg="runtime interface starting up..."
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.224959792Z" level=info msg="starting plugins..."
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.225037535Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 11 01:05:33 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:05:33.225165674Z" level=info msg="No systemd watchdog enabled"
	Dec 11 01:05:33 kubernetes-upgrade-174503 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.947644736Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=52baaf86-d17d-4c7b-bd68-1f4b917c9d21 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.948441487Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=60fa0f92-807d-4518-88de-547bb2e95797 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.948895017Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=d4db3146-248a-46c6-bbd7-58b35a567830 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.949330479Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b15d3990-f2a8-47c8-88dc-3aea5623fbc6 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.949822442Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=411253a3-503f-402a-96da-0d33aac6b025 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.950292957Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=16a5a26a-818c-44a2-a8c3-b0d451768e8f name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:09:45 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:09:45.950856846Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=b8ea9825-9f52-4f4f-9f8b-fa4c11abc59e name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.47602647Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=d438ff3b-b6fc-488d-aaab-4a9b9891a1a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.477328933Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=1f4a10dc-abc3-4ae1-af90-0af213651243 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.477816615Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=2881eec0-98fc-4e35-9ba5-920d6fd99ad1 name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.480485894Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=fec27884-232c-4c47-8a87-b72bbe3a3f8c name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.481016235Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=23fdfe7b-96b9-4874-8eb2-037d0f10703a name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.481521411Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=7c4a54e1-a768-4f0c-8372-a1a8235ea53a name=/runtime.v1.ImageService/ImageStatus
	Dec 11 01:13:48 kubernetes-upgrade-174503 crio[615]: time="2025-12-11T01:13:48.483077684Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=89039fe4-eee2-4959-873a-8b7d3f2a6dd2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.845979] overlayfs: idmapped layers are currently not supported
	[Dec11 00:41] overlayfs: idmapped layers are currently not supported
	[Dec11 00:42] overlayfs: idmapped layers are currently not supported
	[ +51.416292] overlayfs: idmapped layers are currently not supported
	[  +3.779669] overlayfs: idmapped layers are currently not supported
	[Dec11 00:43] overlayfs: idmapped layers are currently not supported
	[Dec11 00:44] overlayfs: idmapped layers are currently not supported
	[Dec11 00:45] overlayfs: idmapped layers are currently not supported
	[Dec11 00:50] overlayfs: idmapped layers are currently not supported
	[Dec11 00:51] overlayfs: idmapped layers are currently not supported
	[Dec11 00:52] overlayfs: idmapped layers are currently not supported
	[Dec11 00:53] overlayfs: idmapped layers are currently not supported
	[Dec11 00:54] overlayfs: idmapped layers are currently not supported
	[Dec11 00:56] overlayfs: idmapped layers are currently not supported
	[ +19.086026] overlayfs: idmapped layers are currently not supported
	[Dec11 00:57] overlayfs: idmapped layers are currently not supported
	[ +53.287901] overlayfs: idmapped layers are currently not supported
	[Dec11 00:58] overlayfs: idmapped layers are currently not supported
	[Dec11 00:59] overlayfs: idmapped layers are currently not supported
	[ +24.341266] overlayfs: idmapped layers are currently not supported
	[Dec11 01:00] overlayfs: idmapped layers are currently not supported
	[Dec11 01:01] overlayfs: idmapped layers are currently not supported
	[Dec11 01:03] overlayfs: idmapped layers are currently not supported
	[Dec11 01:05] overlayfs: idmapped layers are currently not supported
	[Dec11 01:15] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 01:17:52 up  1:29,  0 user,  load average: 2.02, 1.37, 1.62
	Linux kubernetes-upgrade-174503 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 11 01:17:49 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 01:17:50 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 11 01:17:50 kubernetes-upgrade-174503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:50 kubernetes-upgrade-174503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:50 kubernetes-upgrade-174503 kubelet[12343]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:50 kubernetes-upgrade-174503 kubelet[12343]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:50 kubernetes-upgrade-174503 kubelet[12343]: E1211 01:17:50.342130   12343 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 01:17:50 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 01:17:50 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 01:17:51 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 11 01:17:51 kubernetes-upgrade-174503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:51 kubernetes-upgrade-174503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:51 kubernetes-upgrade-174503 kubelet[12348]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:51 kubernetes-upgrade-174503 kubelet[12348]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:51 kubernetes-upgrade-174503 kubelet[12348]: E1211 01:17:51.304359   12348 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 01:17:51 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 01:17:51 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 11 01:17:52 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 11 01:17:52 kubernetes-upgrade-174503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:52 kubernetes-upgrade-174503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 11 01:17:52 kubernetes-upgrade-174503 kubelet[12436]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:52 kubernetes-upgrade-174503 kubelet[12436]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 11 01:17:52 kubernetes-upgrade-174503 kubelet[12436]: E1211 01:17:52.250387   12436 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 11 01:17:52 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 11 01:17:52 kubernetes-upgrade-174503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
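The kubelet journal above shows the actual failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless the KubeletConfiguration sets failCgroupV1 to false, exactly as the kubeadm [WARNING SystemVerification] earlier in this log said. A hedged remediation sketch, assuming minikube's kubelet config path from the logs and that the key is not already present in the file; migrating the host to cgroup v2 remains the supported fix:

    echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet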
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-174503 -n kubernetes-upgrade-174503
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-174503 -n kubernetes-upgrade-174503: exit status 2 (439.136364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-174503" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-174503" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-174503
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-174503: (2.615961017s)
--- FAIL: TestKubernetesUpgrade (796.71s)

                                                
                                    
TestPause/serial/Pause (6.48s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-906108 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-906108 --alsologtostderr -v=5: exit status 80 (2.053201917s)

                                                
                                                
-- stdout --
	* Pausing node pause-906108 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 01:17:23.320582  217021 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:17:23.321419  217021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:17:23.321433  217021 out.go:374] Setting ErrFile to fd 2...
	I1211 01:17:23.321439  217021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:17:23.321698  217021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:17:23.321961  217021 out.go:368] Setting JSON to false
	I1211 01:17:23.321980  217021 mustload.go:66] Loading cluster: pause-906108
	I1211 01:17:23.322459  217021 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:17:23.322910  217021 cli_runner.go:164] Run: docker container inspect pause-906108 --format={{.State.Status}}
	I1211 01:17:23.339873  217021 host.go:66] Checking if "pause-906108" exists ...
	I1211 01:17:23.340187  217021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:17:23.393939  217021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 01:17:23.384663472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:17:23.394656  217021 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-cidr-v6:fd00::1/64 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) ip-family:ipv4 iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text pod-cidr: pod-cidr-v6: ports:[] preload:%!s(bool=true) profile:pause-906108 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 service-cluster-ip-range-v6:fd00::/108 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: static-ipv6: subnet: subnet-v6: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1211 01:17:23.397882  217021 out.go:179] * Pausing node pause-906108 ... 
	I1211 01:17:23.401615  217021 host.go:66] Checking if "pause-906108" exists ...
	I1211 01:17:23.401968  217021 ssh_runner.go:195] Run: systemctl --version
	I1211 01:17:23.402021  217021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:23.420533  217021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:23.525616  217021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:23.538263  217021 pause.go:52] kubelet running: true
	I1211 01:17:23.538337  217021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1211 01:17:23.762193  217021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1211 01:17:23.762277  217021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1211 01:17:23.830220  217021 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:23.830245  217021 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:23.830251  217021 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:23.830255  217021 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:23.830258  217021 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:23.830262  217021 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:23.830264  217021 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:23.830268  217021 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:23.830270  217021 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:23.830276  217021 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:23.830279  217021 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:23.830282  217021 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:23.830285  217021 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:23.830288  217021 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:23.830291  217021 cri.go:89] found id: ""
	I1211 01:17:23.830339  217021 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 01:17:23.840710  217021 retry.go:31] will retry after 146.522651ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:23Z" level=error msg="open /run/runc: no such file or directory"
	I1211 01:17:23.988093  217021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:24.000964  217021 pause.go:52] kubelet running: false
	I1211 01:17:24.001079  217021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1211 01:17:24.143591  217021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1211 01:17:24.143728  217021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1211 01:17:24.216609  217021 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:24.216634  217021 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:24.216639  217021 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:24.216643  217021 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:24.216647  217021 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:24.216651  217021 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:24.216660  217021 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:24.216664  217021 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:24.216703  217021 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:24.216710  217021 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:24.216713  217021 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:24.216716  217021 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:24.216719  217021 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:24.216722  217021 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:24.216726  217021 cri.go:89] found id: ""
	I1211 01:17:24.216785  217021 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 01:17:24.228131  217021 retry.go:31] will retry after 268.949505ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:24Z" level=error msg="open /run/runc: no such file or directory"
	I1211 01:17:24.497382  217021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:24.510806  217021 pause.go:52] kubelet running: false
	I1211 01:17:24.510912  217021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1211 01:17:24.661623  217021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1211 01:17:24.661711  217021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1211 01:17:24.730136  217021 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:24.730164  217021 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:24.730182  217021 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:24.730186  217021 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:24.730189  217021 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:24.730193  217021 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:24.730195  217021 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:24.730199  217021 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:24.730202  217021 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:24.730212  217021 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:24.730218  217021 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:24.730222  217021 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:24.730228  217021 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:24.730233  217021 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:24.730236  217021 cri.go:89] found id: ""
	I1211 01:17:24.730285  217021 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 01:17:24.741790  217021 retry.go:31] will retry after 322.805608ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:24Z" level=error msg="open /run/runc: no such file or directory"
	I1211 01:17:25.065369  217021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:25.078510  217021 pause.go:52] kubelet running: false
	I1211 01:17:25.078607  217021 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1211 01:17:25.230708  217021 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1211 01:17:25.230836  217021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1211 01:17:25.296716  217021 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:25.296784  217021 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:25.296802  217021 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:25.296807  217021 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:25.296811  217021 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:25.296815  217021 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:25.296818  217021 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:25.296822  217021 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:25.296825  217021 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:25.296842  217021 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:25.296851  217021 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:25.296862  217021 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:25.296865  217021 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:25.296868  217021 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:25.296880  217021 cri.go:89] found id: ""
	I1211 01:17:25.296963  217021 ssh_runner.go:195] Run: sudo runc list -f json
	I1211 01:17:25.311027  217021 out.go:203] 
	W1211 01:17:25.313976  217021 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1211 01:17:25.313996  217021 out.go:285] * 
	W1211 01:17:25.319296  217021 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1211 01:17:25.322307  217021 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-906108 --alsologtostderr -v=5" : exit status 80
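(Editor's note: the proximate cause visible in the stderr above is that minikube's pause path enumerates running containers by shelling out to "sudo runc list -f json", and on this CRI-O node runc's default state directory /run/runc does not exist, so every attempt exits with status 1 and the retries at roughly 147ms, 269ms, and 323ms cannot change the outcome. Below is a minimal sketch of that retry-then-fail shape, assuming a plain local exec call rather than minikube's ssh_runner; it is illustrative, not minikube's implementation.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // listRunningContainers mirrors the "sudo runc list -f json" call in the
    // log; with /run/runc absent, runc exits 1 and err is non-nil every time.
    func listRunningContainers() ([]byte, error) {
        return exec.Command("sudo", "runc", "list", "-f", "json").Output()
    }

    func main() {
        // Backoffs approximating the retry.go:31 delays seen above.
        delays := []time.Duration{
            147 * time.Millisecond,
            269 * time.Millisecond,
            323 * time.Millisecond,
        }
        for _, d := range delays {
            if out, err := listRunningContainers(); err == nil {
                fmt.Println(string(out))
                return
            }
            fmt.Printf("will retry after %v\n", d)
            time.Sleep(d)
        }
        // After the final failure, minikube surfaces GUEST_PAUSE and the
        // process exits with status 80, which is what pause_test.go reports.
        fmt.Fprintln(os.Stderr, "Exiting due to GUEST_PAUSE: list running failed")
        os.Exit(80)
    }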
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-906108
helpers_test.go:244: (dbg) docker inspect pause-906108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536",
	        "Created": "2025-12-11T01:15:41.335968704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T01:15:41.417666018Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/hosts",
	        "LogPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536-json.log",
	        "Name": "/pause-906108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-906108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-906108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536",
	                "LowerDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-906108",
	                "Source": "/var/lib/docker/volumes/pause-906108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-906108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-906108",
	                "name.minikube.sigs.k8s.io": "pause-906108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4829b21697011fd29eb4fbcca8d7c5e2dfbab21f4c9cfebb7683a32fdecc10a",
	            "SandboxKey": "/var/run/docker/netns/a4829b216970",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-906108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:7c:a9:b5:e4:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4286905c34e2d59ba56440d34ea995d51e26da2a42cbccea99ea6784d3815231",
	                    "EndpointID": "74d4b723024297c720109376c7334b6b7806e32a8f36d5d421d12a3f3dabb1ac",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-906108",
	                        "dc2c66c19aef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
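(Editor's note: the repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls throughout the log evaluate a Go template against exactly the NetworkSettings.Ports map shown in the inspect output above, yielding 33030, the host port the sshutil lines dial. A trimmed-down sketch of that template evaluation over a fragment of the inspect JSON:)

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "text/template"
    )

    // A fragment of the docker inspect output above, reduced to the
    // fields the port-lookup template actually touches.
    const inspectJSON = `{
      "NetworkSettings": {
        "Ports": {
          "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33030"}],
          "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33033"}]
        }
      }
    }`

    func main() {
        var c map[string]any
        if err := json.Unmarshal([]byte(inspectJSON), &c); err != nil {
            panic(err)
        }
        // The same template string minikube passes to docker inspect -f.
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, c); err != nil {
            panic(err)
        }
        fmt.Println() // prints: 33030
    }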
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-906108 -n pause-906108
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-906108 -n pause-906108: exit status 2 (328.1912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-906108 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-906108 logs -n 25: (1.354289897s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-899269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:03 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p missing-upgrade-724666 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-724666    │ jenkins │ v1.35.0 │ 11 Dec 25 01:03 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ stop    │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p missing-upgrade-724666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ stop    │ -p kubernetes-upgrade-174503                                                                                                                    │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ delete  │ -p missing-upgrade-724666                                                                                                                       │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │                     │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:06 UTC │
	│ stop    │ stopped-upgrade-421398 stop                                                                                                                     │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:06 UTC │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:10 UTC │
	│ delete  │ -p stopped-upgrade-421398                                                                                                                       │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:10 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-335241    │ jenkins │ v1.35.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:11 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:11 UTC │ 11 Dec 25 01:15 UTC │
	│ delete  │ -p running-upgrade-335241                                                                                                                       │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:15 UTC │
	│ start   │ -p pause-906108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:16 UTC │
	│ start   │ -p pause-906108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:16 UTC │ 11 Dec 25 01:17 UTC │
	│ pause   │ -p pause-906108 --alsologtostderr -v=5                                                                                                          │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:17 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 01:16:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 01:16:55.295745  215720 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:16:55.295944  215720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:16:55.296203  215720 out.go:374] Setting ErrFile to fd 2...
	I1211 01:16:55.296415  215720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:16:55.296705  215720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:16:55.297098  215720 out.go:368] Setting JSON to false
	I1211 01:16:55.298058  215720 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5302,"bootTime":1765410514,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 01:16:55.298137  215720 start.go:143] virtualization:  
	I1211 01:16:55.301475  215720 out.go:179] * [pause-906108] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 01:16:55.305306  215720 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 01:16:55.305449  215720 notify.go:221] Checking for updates...
	I1211 01:16:55.311361  215720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 01:16:55.314358  215720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:16:55.317457  215720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 01:16:55.320343  215720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 01:16:55.323171  215720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 01:16:55.326459  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:16:55.327090  215720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 01:16:55.360124  215720 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 01:16:55.360264  215720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:16:55.416956  215720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 01:16:55.40783222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:16:55.417065  215720 docker.go:319] overlay module found
	I1211 01:16:55.420219  215720 out.go:179] * Using the docker driver based on existing profile
	I1211 01:16:55.423043  215720 start.go:309] selected driver: docker
	I1211 01:16:55.423062  215720 start.go:927] validating driver "docker" against &{Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver
-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:16:55.423194  215720 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 01:16:55.423288  215720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:16:55.477292  215720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 01:16:55.468508545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:16:55.477704  215720 cni.go:84] Creating CNI manager for ""
	I1211 01:16:55.477765  215720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:16:55.477812  215720 start.go:353] cluster config:
	{Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fa
lse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:16:55.481056  215720 out.go:179] * Starting "pause-906108" primary control-plane node in "pause-906108" cluster
	I1211 01:16:55.483753  215720 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 01:16:55.486736  215720 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 01:16:55.489552  215720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 01:16:55.489815  215720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:16:55.489844  215720 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1211 01:16:55.489859  215720 cache.go:65] Caching tarball of preloaded images
	I1211 01:16:55.489928  215720 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 01:16:55.489937  215720 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 01:16:55.490067  215720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/config.json ...
	I1211 01:16:55.508254  215720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 01:16:55.508280  215720 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 01:16:55.508296  215720 cache.go:243] Successfully downloaded all kic artifacts
	I1211 01:16:55.508325  215720 start.go:360] acquireMachinesLock for pause-906108: {Name:mk59739559c15612c7a10bb76db9c6c6334a285d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 01:16:55.508387  215720 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "pause-906108"
	I1211 01:16:55.508411  215720 start.go:96] Skipping create...Using existing machine configuration
	I1211 01:16:55.508422  215720 fix.go:54] fixHost starting: 
	I1211 01:16:55.508685  215720 cli_runner.go:164] Run: docker container inspect pause-906108 --format={{.State.Status}}
	I1211 01:16:55.528536  215720 fix.go:112] recreateIfNeeded on pause-906108: state=Running err=<nil>
	W1211 01:16:55.528578  215720 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 01:16:55.531854  215720 out.go:252] * Updating the running docker "pause-906108" container ...
	I1211 01:16:55.531890  215720 machine.go:94] provisionDockerMachine start ...
	I1211 01:16:55.531966  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.549753  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.550079  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.550094  215720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 01:16:55.698717  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-906108
	
	I1211 01:16:55.698742  215720 ubuntu.go:182] provisioning hostname "pause-906108"
	I1211 01:16:55.698813  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.717883  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.718229  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.718246  215720 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-906108 && echo "pause-906108" | sudo tee /etc/hostname
	I1211 01:16:55.880385  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-906108
	
	I1211 01:16:55.880457  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.899069  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.899377  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.899392  215720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-906108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-906108/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-906108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 01:16:56.071708  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 01:16:56.071745  215720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 01:16:56.071769  215720 ubuntu.go:190] setting up certificates
	I1211 01:16:56.071779  215720 provision.go:84] configureAuth start
	I1211 01:16:56.071844  215720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-906108
	I1211 01:16:56.090515  215720 provision.go:143] copyHostCerts
	I1211 01:16:56.090608  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 01:16:56.090618  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 01:16:56.090693  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 01:16:56.090811  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 01:16:56.090817  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 01:16:56.090849  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 01:16:56.090902  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 01:16:56.090906  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 01:16:56.090929  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 01:16:56.091017  215720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.pause-906108 san=[127.0.0.1 192.168.85.2 localhost minikube pause-906108]
	I1211 01:16:56.410820  215720 provision.go:177] copyRemoteCerts
	I1211 01:16:56.410885  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 01:16:56.410930  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:56.432528  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:16:56.538684  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 01:16:56.556855  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1211 01:16:56.574263  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 01:16:56.592013  215720 provision.go:87] duration metric: took 520.209998ms to configureAuth
	I1211 01:16:56.592039  215720 ubuntu.go:206] setting minikube options for container-runtime
	I1211 01:16:56.592311  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:16:56.592419  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:56.609658  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:56.609964  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:56.609986  215720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 01:17:02.061761  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 01:17:02.061790  215720 machine.go:97] duration metric: took 6.529892008s to provisionDockerMachine
	I1211 01:17:02.061803  215720 start.go:293] postStartSetup for "pause-906108" (driver="docker")
	I1211 01:17:02.061814  215720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 01:17:02.061897  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 01:17:02.061948  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.083797  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.196169  215720 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 01:17:02.200177  215720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 01:17:02.200213  215720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 01:17:02.200227  215720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 01:17:02.200296  215720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 01:17:02.200389  215720 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 01:17:02.200510  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 01:17:02.209568  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:02.232048  215720 start.go:296] duration metric: took 170.209609ms for postStartSetup
	I1211 01:17:02.232191  215720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 01:17:02.232239  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.251247  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.357773  215720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 01:17:02.363561  215720 fix.go:56] duration metric: took 6.85513206s for fixHost
	I1211 01:17:02.363589  215720 start.go:83] releasing machines lock for "pause-906108", held for 6.855187782s
	I1211 01:17:02.363678  215720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-906108
	I1211 01:17:02.384696  215720 ssh_runner.go:195] Run: cat /version.json
	I1211 01:17:02.384772  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.385690  215720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 01:17:02.385781  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.413230  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.417279  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.524172  215720 ssh_runner.go:195] Run: systemctl --version
	I1211 01:17:02.620040  215720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 01:17:02.680911  215720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 01:17:02.687185  215720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 01:17:02.687343  215720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 01:17:02.700286  215720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1211 01:17:02.700358  215720 start.go:496] detecting cgroup driver to use...
	I1211 01:17:02.700407  215720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 01:17:02.700490  215720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 01:17:02.723472  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 01:17:02.740836  215720 docker.go:218] disabling cri-docker service (if available) ...
	I1211 01:17:02.740934  215720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 01:17:02.761175  215720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 01:17:02.776274  215720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 01:17:02.926123  215720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 01:17:03.080759  215720 docker.go:234] disabling docker service ...
	I1211 01:17:03.080854  215720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 01:17:03.098821  215720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 01:17:03.115480  215720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 01:17:03.259171  215720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 01:17:03.411361  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 01:17:03.428164  215720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 01:17:03.445255  215720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 01:17:03.445399  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.456154  215720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 01:17:03.456256  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.467382  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.480004  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.489793  215720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 01:17:03.500082  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.511151  215720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.522472  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
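	The run of sed commands above rewrites individual keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and the default_sysctls block. An in-process equivalent of the first two rewrites, shown with Go's regexp package purely for illustration (minikube itself shells out to sed as logged; the sample input below is invented):

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    func main() {
	        // Hypothetical starting contents of 02-crio.conf.
	        conf := `pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "systemd"
	    `
	        // Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	        // Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	        fmt.Print(conf)
	    }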
	I1211 01:17:03.534212  215720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 01:17:03.543301  215720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 01:17:03.551779  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:03.700320  215720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 01:17:03.942757  215720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 01:17:03.942854  215720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 01:17:03.947462  215720 start.go:564] Will wait 60s for crictl version
	I1211 01:17:03.947532  215720 ssh_runner.go:195] Run: which crictl
	I1211 01:17:03.951917  215720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 01:17:03.980378  215720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 01:17:03.980477  215720 ssh_runner.go:195] Run: crio --version
	I1211 01:17:04.012707  215720 ssh_runner.go:195] Run: crio --version
	I1211 01:17:04.070542  215720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1211 01:17:04.074338  215720 cli_runner.go:164] Run: docker network inspect pause-906108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 01:17:04.095585  215720 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1211 01:17:04.100487  215720 kubeadm.go:884] updating cluster {Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 01:17:04.100643  215720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:17:04.100700  215720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:04.142959  215720 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:04.143008  215720 crio.go:433] Images already preloaded, skipping extraction
	I1211 01:17:04.143074  215720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:04.175519  215720 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:04.175542  215720 cache_images.go:86] Images are preloaded, skipping loading
	I1211 01:17:04.175550  215720 kubeadm.go:935] updating node { 192.168.85.2  8443 v1.34.2 crio true true} ...
	I1211 01:17:04.175664  215720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-906108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 01:17:04.175772  215720 ssh_runner.go:195] Run: crio config
	I1211 01:17:04.243345  215720 cni.go:84] Creating CNI manager for ""
	I1211 01:17:04.243385  215720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:17:04.243417  215720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 01:17:04.243442  215720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-906108 NodeName:pause-906108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 01:17:04.243614  215720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-906108"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 01:17:04.243697  215720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 01:17:04.253095  215720 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 01:17:04.253244  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 01:17:04.262349  215720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1211 01:17:04.278639  215720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 01:17:04.295232  215720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
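	At this point the rendered kubeadm config (the three-document YAML printed above: InitConfiguration/ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch that decodes such a multi-document file and prints each kind and apiVersion, useful as a quick sanity check when editing these files by hand; gopkg.in/yaml.v3 is an assumed library choice, and minikube itself renders this file from Go templates instead:

	    package main

	    import (
	        "fmt"
	        "io"
	        "log"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    func main() {
	        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer f.Close()
	        dec := yaml.NewDecoder(f)
	        for {
	            // Each "---"-separated document decodes independently.
	            var doc map[string]interface{}
	            err := dec.Decode(&doc)
	            if err == io.EOF {
	                break
	            }
	            if err != nil {
	                log.Fatalf("invalid YAML document: %v", err)
	            }
	            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	        }
	    }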
	I1211 01:17:04.310317  215720 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1211 01:17:04.314866  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:04.514772  215720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 01:17:04.543020  215720 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108 for IP: 192.168.85.2
	I1211 01:17:04.543055  215720 certs.go:195] generating shared ca certs ...
	I1211 01:17:04.543087  215720 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:04.543265  215720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 01:17:04.543340  215720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 01:17:04.543356  215720 certs.go:257] generating profile certs ...
	I1211 01:17:04.543512  215720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key
	I1211 01:17:04.543599  215720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.key.520b1307
	I1211 01:17:04.543663  215720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.key
	I1211 01:17:04.543815  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 01:17:04.543867  215720 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 01:17:04.543881  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 01:17:04.543918  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 01:17:04.543957  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 01:17:04.543987  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 01:17:04.544048  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:04.544792  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 01:17:04.574562  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 01:17:04.609768  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 01:17:04.654389  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 01:17:04.727758  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 01:17:04.778641  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 01:17:04.820874  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 01:17:04.854097  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 01:17:04.884578  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 01:17:04.944219  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 01:17:04.988947  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 01:17:05.024876  215720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 01:17:05.045324  215720 ssh_runner.go:195] Run: openssl version
	I1211 01:17:05.058139  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.067768  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 01:17:05.080781  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.088506  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.088604  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.149983  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 01:17:05.160134  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.170485  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 01:17:05.180464  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.186214  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.186297  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.235516  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 01:17:05.245543  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.255398  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 01:17:05.265364  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.270907  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.271060  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.317990  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1211 01:17:05.329647  215720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 01:17:05.336478  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 01:17:05.385326  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 01:17:05.431536  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 01:17:05.485492  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 01:17:05.544249  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 01:17:05.612935  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
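	Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration before the cluster restart. The same check expressed with Go's standard library, with the certificate path copied from the log as a placeholder:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    func main() {
	        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            log.Fatal(err)
	        }
	        block, _ := pem.Decode(raw)
	        if block == nil {
	            log.Fatal("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Equivalent of -checkend 86400: does the cert outlive the next 24h?
	        if time.Until(cert.NotAfter) < 24*time.Hour {
	            fmt.Println("certificate expires within 24h; would be regenerated")
	        } else {
	            fmt.Println("certificate valid beyond 24h")
	        }
	    }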
	I1211 01:17:05.700090  215720 kubeadm.go:401] StartCluster: {Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:17:05.700247  215720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 01:17:05.700339  215720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 01:17:05.777868  215720 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:05.777916  215720 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:05.777923  215720 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:05.777927  215720 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:05.777931  215720 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:05.777947  215720 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:05.777959  215720 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:05.777966  215720 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:05.777970  215720 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:05.777978  215720 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:05.777986  215720 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:05.777990  215720 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:05.777994  215720 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:05.777998  215720 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:05.778001  215720 cri.go:89] found id: ""
	I1211 01:17:05.778071  215720 ssh_runner.go:195] Run: sudo runc list -f json
	W1211 01:17:05.797915  215720 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:05Z" level=error msg="open /run/runc: no such file or directory"
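	The warning above is benign on a freshly restarted runtime: runc list reads container state from /run/runc, which does not exist until runc has created state there, so the unpause probe gives up and the flow falls through to the cluster-restart path. A sketch of invoking the same command from Go and treating that stderr as a soft failure (the error-handling policy here is my own illustration, not minikube's code):

	    package main

	    import (
	        "bytes"
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	        var stdout, stderr bytes.Buffer
	        cmd.Stdout = &stdout
	        cmd.Stderr = &stderr
	        if err := cmd.Run(); err != nil {
	            // On a node where runc has no state directory yet, the command
	            // exits non-zero with "open /run/runc: no such file or directory".
	            fmt.Printf("runc list failed (treating as no paused containers): %v, stderr: %s\n", err, stderr.String())
	            return
	        }
	        fmt.Println(stdout.String())
	    }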
	I1211 01:17:05.798025  215720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 01:17:05.812393  215720 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 01:17:05.812434  215720 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 01:17:05.812503  215720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 01:17:05.822507  215720 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 01:17:05.823317  215720 kubeconfig.go:125] found "pause-906108" server: "https://192.168.85.2:8443"
	I1211 01:17:05.826541  215720 kapi.go:59] client config for pause-906108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 01:17:05.827604  215720 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 01:17:05.827639  215720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 01:17:05.827646  215720 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 01:17:05.827664  215720 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 01:17:05.827775  215720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 01:17:05.828410  215720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 01:17:05.850454  215720 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1211 01:17:05.850493  215720 kubeadm.go:602] duration metric: took 38.051478ms to restartPrimaryControlPlane
	I1211 01:17:05.850504  215720 kubeadm.go:403] duration metric: took 150.423895ms to StartCluster
	I1211 01:17:05.850522  215720 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:05.850595  215720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:17:05.851522  215720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:05.851807  215720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 01:17:05.852143  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:17:05.852454  215720 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 01:17:05.855385  215720 out.go:179] * Verifying Kubernetes components...
	I1211 01:17:05.857264  215720 out.go:179] * Enabled addons: 
	I1211 01:17:05.859319  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:05.861126  215720 addons.go:530] duration metric: took 8.673151ms for enable addons: enabled=[]
	I1211 01:17:06.098574  215720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 01:17:06.117657  215720 node_ready.go:35] waiting up to 6m0s for node "pause-906108" to be "Ready" ...
	I1211 01:17:09.563158  215720 node_ready.go:49] node "pause-906108" is "Ready"
	I1211 01:17:09.563184  215720 node_ready.go:38] duration metric: took 3.445495835s for node "pause-906108" to be "Ready" ...
	I1211 01:17:09.563198  215720 api_server.go:52] waiting for apiserver process to appear ...
	I1211 01:17:09.563260  215720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:17:09.583554  215720 api_server.go:72] duration metric: took 3.731705299s to wait for apiserver process to appear ...
	I1211 01:17:09.583576  215720 api_server.go:88] waiting for apiserver healthz status ...
	I1211 01:17:09.583594  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:09.649813  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1211 01:17:09.649881  215720 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1211 01:17:10.084554  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:10.092844  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1211 01:17:10.092914  215720 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1211 01:17:10.584652  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:10.592732  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1211 01:17:10.594094  215720 api_server.go:141] control plane version: v1.34.2
	I1211 01:17:10.594126  215720 api_server.go:131] duration metric: took 1.010542684s to wait for apiserver health ...
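	The health wait above polls https://192.168.85.2:8443/healthz until the 500s from the still-initializing post-start hooks (rbac/bootstrap-roles, bootstrap-controller, and friends) turn into a 200 "ok". A stripped-down version of that polling loop; the real client authenticates with the cluster CA, whereas InsecureSkipVerify below is a shortcut for the sketch only:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        for attempt := 0; attempt < 60; attempt++ {
	            resp, err := client.Get("https://192.168.85.2:8443/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Printf("healthz: %s\n", body) // "ok"
	                    return
	                }
	                // 500 with per-hook [+]/[-] detail while hooks finish, as logged above.
	                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }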
	I1211 01:17:10.594136  215720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 01:17:10.597318  215720 system_pods.go:59] 7 kube-system pods found
	I1211 01:17:10.597368  215720 system_pods.go:61] "coredns-66bc5c9577-qrtg8" [a76580f4-b7b6-41c6-848e-47f2bd78b1a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 01:17:10.597378  215720 system_pods.go:61] "etcd-pause-906108" [d0ddab26-b8d9-4436-9c69-fd89c373c180] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1211 01:17:10.597384  215720 system_pods.go:61] "kindnet-h5z5t" [bb2b435e-fada-4f6d-8cc1-44fd7cfca57a] Running
	I1211 01:17:10.597391  215720 system_pods.go:61] "kube-apiserver-pause-906108" [62c83569-e6b3-4f43-b3c2-2b7f703fcf9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1211 01:17:10.597403  215720 system_pods.go:61] "kube-controller-manager-pause-906108" [a3a4095f-c17f-4fb6-b53d-2e932feb6cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1211 01:17:10.597411  215720 system_pods.go:61] "kube-proxy-4mgks" [9a6ccf72-6f7d-4c2d-bd59-6251e435d675] Running
	I1211 01:17:10.597417  215720 system_pods.go:61] "kube-scheduler-pause-906108" [14c8467d-2742-4ef4-a0d7-d516d80f9913] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1211 01:17:10.597423  215720 system_pods.go:74] duration metric: took 3.281114ms to wait for pod list to return data ...
	I1211 01:17:10.597434  215720 default_sa.go:34] waiting for default service account to be created ...
	I1211 01:17:10.600070  215720 default_sa.go:45] found service account: "default"
	I1211 01:17:10.600143  215720 default_sa.go:55] duration metric: took 2.702541ms for default service account to be created ...
	I1211 01:17:10.600159  215720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 01:17:10.604777  215720 system_pods.go:86] 7 kube-system pods found
	I1211 01:17:10.604814  215720 system_pods.go:89] "coredns-66bc5c9577-qrtg8" [a76580f4-b7b6-41c6-848e-47f2bd78b1a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 01:17:10.604827  215720 system_pods.go:89] "etcd-pause-906108" [d0ddab26-b8d9-4436-9c69-fd89c373c180] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1211 01:17:10.604833  215720 system_pods.go:89] "kindnet-h5z5t" [bb2b435e-fada-4f6d-8cc1-44fd7cfca57a] Running
	I1211 01:17:10.604841  215720 system_pods.go:89] "kube-apiserver-pause-906108" [62c83569-e6b3-4f43-b3c2-2b7f703fcf9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1211 01:17:10.604849  215720 system_pods.go:89] "kube-controller-manager-pause-906108" [a3a4095f-c17f-4fb6-b53d-2e932feb6cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1211 01:17:10.604857  215720 system_pods.go:89] "kube-proxy-4mgks" [9a6ccf72-6f7d-4c2d-bd59-6251e435d675] Running
	I1211 01:17:10.604865  215720 system_pods.go:89] "kube-scheduler-pause-906108" [14c8467d-2742-4ef4-a0d7-d516d80f9913] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1211 01:17:10.604878  215720 system_pods.go:126] duration metric: took 4.712681ms to wait for k8s-apps to be running ...
	I1211 01:17:10.604887  215720 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 01:17:10.604951  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:10.627406  215720 system_svc.go:56] duration metric: took 22.510351ms WaitForService to wait for kubelet
	I1211 01:17:10.627433  215720 kubeadm.go:587] duration metric: took 4.775588703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 01:17:10.627453  215720 node_conditions.go:102] verifying NodePressure condition ...
	I1211 01:17:10.646019  215720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1211 01:17:10.646048  215720 node_conditions.go:123] node cpu capacity is 2
	I1211 01:17:10.646061  215720 node_conditions.go:105] duration metric: took 18.603379ms to run NodePressure ...
	I1211 01:17:10.646074  215720 start.go:242] waiting for startup goroutines ...
	I1211 01:17:10.646081  215720 start.go:247] waiting for cluster config update ...
	I1211 01:17:10.646090  215720 start.go:256] writing updated cluster config ...
	I1211 01:17:10.646403  215720 ssh_runner.go:195] Run: rm -f paused
	I1211 01:17:10.650021  215720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 01:17:10.650746  215720 kapi.go:59] client config for pause-906108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 01:17:10.658801  215720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qrtg8" in "kube-system" namespace to be "Ready" or be gone ...
	W1211 01:17:12.664611  215720 pod_ready.go:104] pod "coredns-66bc5c9577-qrtg8" is not "Ready", error: <nil>
	W1211 01:17:14.666764  215720 pod_ready.go:104] pod "coredns-66bc5c9577-qrtg8" is not "Ready", error: <nil>
	I1211 01:17:16.666190  215720 pod_ready.go:94] pod "coredns-66bc5c9577-qrtg8" is "Ready"
	I1211 01:17:16.666222  215720 pod_ready.go:86] duration metric: took 6.007344743s for pod "coredns-66bc5c9577-qrtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:16.672412  215720 pod_ready.go:83] waiting for pod "etcd-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	W1211 01:17:18.677969  215720 pod_ready.go:104] pod "etcd-pause-906108" is not "Ready", error: <nil>
	W1211 01:17:20.678223  215720 pod_ready.go:104] pod "etcd-pause-906108" is not "Ready", error: <nil>
	I1211 01:17:21.177359  215720 pod_ready.go:94] pod "etcd-pause-906108" is "Ready"
	I1211 01:17:21.177389  215720 pod_ready.go:86] duration metric: took 4.504947347s for pod "etcd-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.179761  215720 pod_ready.go:83] waiting for pod "kube-apiserver-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.184373  215720 pod_ready.go:94] pod "kube-apiserver-pause-906108" is "Ready"
	I1211 01:17:21.184396  215720 pod_ready.go:86] duration metric: took 4.608492ms for pod "kube-apiserver-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.186613  215720 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.692047  215720 pod_ready.go:94] pod "kube-controller-manager-pause-906108" is "Ready"
	I1211 01:17:22.692076  215720 pod_ready.go:86] duration metric: took 1.505439195s for pod "kube-controller-manager-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.694082  215720 pod_ready.go:83] waiting for pod "kube-proxy-4mgks" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.698001  215720 pod_ready.go:94] pod "kube-proxy-4mgks" is "Ready"
	I1211 01:17:22.698025  215720 pod_ready.go:86] duration metric: took 3.916795ms for pod "kube-proxy-4mgks" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.775717  215720 pod_ready.go:83] waiting for pod "kube-scheduler-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:23.176115  215720 pod_ready.go:94] pod "kube-scheduler-pause-906108" is "Ready"
	I1211 01:17:23.176145  215720 pod_ready.go:86] duration metric: took 400.402204ms for pod "kube-scheduler-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:23.176158  215720 pod_ready.go:40] duration metric: took 12.526059058s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
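	The pod_ready.go lines above poll each control-plane pod until its PodReady condition reports True. Roughly the same check expressed with client-go, using the kubeconfig path and a pod name taken from this log; the retry loop and the four-minute timeout are elided for brevity:

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22061-2739/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-4mgks", metav1.GetOptions{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        // A pod is "Ready" when its PodReady condition is ConditionTrue.
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
	            }
	        }
	    }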
	I1211 01:17:23.231645  215720 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1211 01:17:23.235297  215720 out.go:179] * Done! kubectl is now configured to use "pause-906108" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.662674485Z" level=info msg="Creating container: kube-system/kube-proxy-4mgks/kube-proxy" id=dcb803c5-b04f-422c-9091-2c767d8d58ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.670506243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.681862334Z" level=info msg="Created container df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d: kube-system/etcd-pause-906108/etcd" id=c2dad993-771e-4c1c-84f2-0c64e06d5d9f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.68259833Z" level=info msg="Starting container: df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d" id=7dccae40-99e5-42db-bebf-d0e58738b1e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.703644629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.704225934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.713798545Z" level=info msg="Started container" PID=2237 containerID=df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d description=kube-system/etcd-pause-906108/etcd id=7dccae40-99e5-42db-bebf-d0e58738b1e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a953e50a768c88584be1b577bc0d9157848ba25ebba8bf55c76bad24776d8e9
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.756632177Z" level=info msg="Created container fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438: kube-system/kube-scheduler-pause-906108/kube-scheduler" id=43777fdf-692b-4e31-a90d-d54fc7ad0613 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.759766098Z" level=info msg="Starting container: fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438" id=2dba024d-b8e5-4050-a9f1-dd7093eff74a name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.763282807Z" level=info msg="Started container" PID=2258 containerID=fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438 description=kube-system/kube-scheduler-pause-906108/kube-scheduler id=2dba024d-b8e5-4050-a9f1-dd7093eff74a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f678fd8871f070c0c788d8295cb6965fa502f2a44e05564ff799a41e8a1b6a07
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.103436752Z" level=info msg="Created container 4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae: kube-system/kube-proxy-4mgks/kube-proxy" id=dcb803c5-b04f-422c-9091-2c767d8d58ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.104282436Z" level=info msg="Starting container: 4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae" id=fc3d755a-99ac-4173-8314-92623f9c3dd6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.108425578Z" level=info msg="Started container" PID=2272 containerID=4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae description=kube-system/kube-proxy-4mgks/kube-proxy id=fc3d755a-99ac-4173-8314-92623f9c3dd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aeea51ab827ec1b70e3f2920d7537ef0bd6d92706762bf0355842159be92352
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.968596546Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972217043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972251513Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972274923Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976000216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976049111Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976069214Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979431928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979466472Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979489709Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.982734163Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.982897635Z" level=info msg="Updated default CNI network name to kindnet"
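The CREATE/WRITE/RENAME sequence above is kindnet writing 10-kindnet.conflist.temp and then renaming it over 10-kindnet.conflist, the usual atomic-replace pattern for config files; CRI-O's CNI watcher re-reads the directory on every event, which is why each event is followed by a fresh "Found CNI network" line. A quick way to inspect the result inside the node (profile name taken from this run):

    # List CNI configs and dump the active kindnet conflist
    minikube ssh -p pause-906108 -- sudo ls -l /etc/cni/net.d
    minikube ssh -p pause-906108 -- sudo cat /etc/cni/net.d/10-kindnet.conflist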
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4a34a124a3f7d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   1aeea51ab827e       kube-proxy-4mgks                       kube-system
	fcc376b66efd9       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   f678fd8871f07       kube-scheduler-pause-906108            kube-system
	df65390879ea7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   4a953e50a768c       etcd-pause-906108                      kube-system
	1992792c47a16       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   7bf03e7d0de1f       kube-apiserver-pause-906108            kube-system
	c21800dea36b1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   9019270d447c6       coredns-66bc5c9577-qrtg8               kube-system
	09093121fe417       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   25d083c844afe       kube-controller-manager-pause-906108   kube-system
	8b74cb1d38f11       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   00e03dc535245       kindnet-h5z5t                          kube-system
	fc6d16835668f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   9019270d447c6       coredns-66bc5c9577-qrtg8               kube-system
	cd13342125fd1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   1aeea51ab827e       kube-proxy-4mgks                       kube-system
	a9a4d4edb8cb9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   00e03dc535245       kindnet-h5z5t                          kube-system
	dc18fcddd44f0       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7bf03e7d0de1f       kube-apiserver-pause-906108            kube-system
	9170d36b0b6a8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   4a953e50a768c       etcd-pause-906108                      kube-system
	91153c0ce99a5       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   25d083c844afe       kube-controller-manager-pause-906108   kube-system
	a1c6a7de725aa       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f678fd8871f07       kube-scheduler-pause-906108            kube-system
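The table is CRI-O's container inventory: every attempt-0 row is Exited while its attempt-1 replacement is Running against the same POD ID, i.e. the restart recreated the containers but reused the sandboxes. Roughly the same view can be pulled directly with crictl inside the node:

    # All containers, including the exited first attempts
    minikube ssh -p pause-906108 -- sudo crictl ps -a
    # Sandboxes, to match the POD ID column above
    minikube ssh -p pause-906108 -- sudo crictl pods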
	
	
	==> coredns [c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46510 - 42823 "HINFO IN 3458944203887410954.8276805735454187518. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041293906s
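The "waiting for Kubernetes API before starting server" lines are CoreDNS's kubernetes plugin retrying its list/watch while the apiserver was still restarting; until those caches sync, the ready plugin keeps reporting unready on its default port 8181. A hedged way to probe that endpoint directly (pod name from this log; the sleep just gives the forward time to bind):

    # Forward CoreDNS's readiness port and query it
    kubectl -n kube-system port-forward pod/coredns-66bc5c9577-qrtg8 8181:8181 &
    sleep 2
    curl -s http://127.0.0.1:8181/ready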
	
	
	==> coredns [fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60910 - 56665 "HINFO IN 3401748386353232504.324323302585184621. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.038281407s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
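This earlier CoreDNS instance shut down cleanly: on SIGTERM it enters a 5-second lameduck window so in-flight queries can drain. The window comes from the health plugin's lameduck setting in the Corefile, which minikube ships in the coredns ConfigMap:

    # Confirm the lameduck duration in the deployed Corefile
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -n lameduck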
	
	
	==> describe nodes <==
	Name:               pause-906108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-906108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=pause-906108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T01_16_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 01:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-906108
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 11 Dec 2025 01:17:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-906108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                51ae9a37-8fb0-44d0-8efd-661f74472e17
	  Boot ID:                    0edab61d-52b1-4525-85dd-848bc0b1d36e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qrtg8                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-906108                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-h5z5t                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-pause-906108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-906108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-4mgks                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-906108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-906108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-906108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-906108 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-906108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-906108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-906108 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-906108 event: Registered Node pause-906108 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-906108 status is now: NodeReady
	  Normal   RegisteredNode           14s                node-controller  Node pause-906108 event: Registered Node pause-906108 in Controller
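The block above is standard kubectl describe node output; the doubled Starting/RegisteredNode events (77s/14s and 74s/16s ago) line up with the stop/restart cycle this pause test drives. To regenerate it, and to see the raw events the Events table is built from:

    # Node description plus the underlying events
    kubectl describe node pause-906108
    kubectl get events -A --field-selector involvedObject.name=pause-906108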
	
	
	==> dmesg <==
	[  +3.845979] overlayfs: idmapped layers are currently not supported
	[Dec11 00:41] overlayfs: idmapped layers are currently not supported
	[Dec11 00:42] overlayfs: idmapped layers are currently not supported
	[ +51.416292] overlayfs: idmapped layers are currently not supported
	[  +3.779669] overlayfs: idmapped layers are currently not supported
	[Dec11 00:43] overlayfs: idmapped layers are currently not supported
	[Dec11 00:44] overlayfs: idmapped layers are currently not supported
	[Dec11 00:45] overlayfs: idmapped layers are currently not supported
	[Dec11 00:50] overlayfs: idmapped layers are currently not supported
	[Dec11 00:51] overlayfs: idmapped layers are currently not supported
	[Dec11 00:52] overlayfs: idmapped layers are currently not supported
	[Dec11 00:53] overlayfs: idmapped layers are currently not supported
	[Dec11 00:54] overlayfs: idmapped layers are currently not supported
	[Dec11 00:56] overlayfs: idmapped layers are currently not supported
	[ +19.086026] overlayfs: idmapped layers are currently not supported
	[Dec11 00:57] overlayfs: idmapped layers are currently not supported
	[ +53.287901] overlayfs: idmapped layers are currently not supported
	[Dec11 00:58] overlayfs: idmapped layers are currently not supported
	[Dec11 00:59] overlayfs: idmapped layers are currently not supported
	[ +24.341266] overlayfs: idmapped layers are currently not supported
	[Dec11 01:00] overlayfs: idmapped layers are currently not supported
	[Dec11 01:01] overlayfs: idmapped layers are currently not supported
	[Dec11 01:03] overlayfs: idmapped layers are currently not supported
	[Dec11 01:05] overlayfs: idmapped layers are currently not supported
	[Dec11 01:15] overlayfs: idmapped layers are currently not supported
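The repeated overlayfs warnings are kernel-side notices emitted each time a container runtime mounts an overlay with ID-mapped layers on a 5.15 kernel, which lacks support; they are benign noise for these tests. Since the Docker driver shares the host kernel, they can be filtered on the host directly:

    # Overlayfs warnings with human-readable timestamps
    sudo dmesg -T | grep overlayfs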
	
	
	==> etcd [9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b] <==
	{"level":"warn","ts":"2025-12-11T01:16:01.623302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.632263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.654200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.691103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.707801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.720521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.828123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-11T01:16:56.791661Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-11T01:16:56.791709Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-906108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-11T01:16:56.791812Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-11T01:16:56.932048Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-11T01:16:56.932115Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.932137Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-11T01:16:56.932234Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-11T01:16:56.932654Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-11T01:16:56.932260Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-11T01:16:56.932869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-11T01:16:56.932884Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-11T01:16:56.933264Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-11T01:16:56.933290Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-11T01:16:56.933299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.939555Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-11T01:16:56.939650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.939690Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-11T01:16:56.939699Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-906108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d] <==
	{"level":"warn","ts":"2025-12-11T01:17:07.907268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.936117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.949161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.968696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.002353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.020097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.038086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.056669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.074243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.091966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.126629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.141302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.154640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.191036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.196129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.213788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.232073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.249415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.267247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.285223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.317382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.321063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.339462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.355996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.462803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54478","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:17:26 up  1:28,  0 user,  load average: 1.08, 1.16, 1.56
	Linux pause-906108 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1] <==
	I1211 01:17:04.722855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1211 01:17:04.723139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1211 01:17:04.723270       1 main.go:148] setting mtu 1500 for CNI 
	I1211 01:17:04.723281       1 main.go:178] kindnetd IP family: "ipv4"
	I1211 01:17:04.723294       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-11T01:17:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1211 01:17:04.968631       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1211 01:17:04.968743       1 controller.go:381] "Waiting for informer caches to sync"
	I1211 01:17:04.968777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1211 01:17:04.974236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1211 01:17:09.673872       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1211 01:17:09.673982       1 metrics.go:72] Registering metrics
	I1211 01:17:09.674100       1 controller.go:711] "Syncing nftables rules"
	I1211 01:17:14.968106       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:17:14.968262       1 main.go:301] handling current node
	I1211 01:17:24.968529       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:17:24.968580       1 main.go:301] handling current node
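kindnet here is doing two jobs: the main loop handles node routes ("handling current node"), while the embedded kube-network-policies controller syncs nftables rules; the nri line is a non-fatal fallback when /var/run/nri/nri.sock does not exist. To look at the rules it installed (assuming the nft binary is present in the node image; table and chain names vary by kindnet version):

    # Dump the nftables ruleset and search for kindnet-related entries
    minikube ssh -p pause-906108 -- sudo nft list ruleset | grep -i -A2 kindnet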
	
	
	==> kindnet [a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2] <==
	I1211 01:16:11.722836       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1211 01:16:11.723087       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1211 01:16:11.723202       1 main.go:148] setting mtu 1500 for CNI 
	I1211 01:16:11.723222       1 main.go:178] kindnetd IP family: "ipv4"
	I1211 01:16:11.723235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-11T01:16:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1211 01:16:11.923379       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1211 01:16:11.923398       1 controller.go:381] "Waiting for informer caches to sync"
	I1211 01:16:11.923410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1211 01:16:11.924274       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1211 01:16:41.923987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1211 01:16:41.923986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1211 01:16:41.924200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1211 01:16:41.924217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1211 01:16:43.523532       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1211 01:16:43.523565       1 metrics.go:72] Registering metrics
	I1211 01:16:43.523634       1 controller.go:711] "Syncing nftables rules"
	I1211 01:16:51.930718       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:16:51.930776       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017] <==
	I1211 01:17:09.529102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1211 01:17:09.529235       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1211 01:17:09.529507       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1211 01:17:09.532446       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1211 01:17:09.563363       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1211 01:17:09.590333       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1211 01:17:09.595076       1 policy_source.go:240] refreshing policies
	I1211 01:17:09.616842       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1211 01:17:09.624689       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1211 01:17:09.624955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1211 01:17:09.625093       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1211 01:17:09.625174       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1211 01:17:09.625463       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1211 01:17:09.625910       1 aggregator.go:171] initial CRD sync complete...
	I1211 01:17:09.625932       1 autoregister_controller.go:144] Starting autoregister controller
	I1211 01:17:09.625939       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1211 01:17:09.625946       1 cache.go:39] Caches are synced for autoregister controller
	I1211 01:17:09.636019       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1211 01:17:09.675976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1211 01:17:10.219633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1211 01:17:11.462352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1211 01:17:13.048259       1 controller.go:667] quota admission added evaluator for: endpoints
	I1211 01:17:13.100321       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1211 01:17:13.148975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 01:17:13.250907       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f] <==
	W1211 01:16:56.809738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809745       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809789       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809818       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809843       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809865       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809897       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809914       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809949       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809961       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809999       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810008       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810051       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810080       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810105       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810126       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810154       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810171       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810205       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810239       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810289       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810335       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809790       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810054       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
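These warnings are all one event fanned out across the apiserver's per-resource gRPC channels: etcd closed at 01:16:56 (see its log above), so every channel's reconnect attempt to 127.0.0.1:2379 was refused until the apiserver itself exited. Since the container has since restarted, the old stream is only reachable via the previous log:

    # Count the per-channel reconnect failures in the exited apiserver
    kubectl -n kube-system logs kube-apiserver-pause-906108 --previous | grep -c 'connection refused'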
	
	
	==> kube-controller-manager [09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd] <==
	I1211 01:17:12.854116       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1211 01:17:12.854121       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1211 01:17:12.874352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:17:12.879531       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1211 01:17:12.879626       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1211 01:17:12.879647       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1211 01:17:12.879660       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1211 01:17:12.879667       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1211 01:17:12.888140       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1211 01:17:12.890392       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1211 01:17:12.891841       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1211 01:17:12.891870       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1211 01:17:12.893014       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1211 01:17:12.893017       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1211 01:17:12.893068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:17:12.893143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1211 01:17:12.893152       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1211 01:17:12.894832       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1211 01:17:12.894985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1211 01:17:12.895046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1211 01:17:12.895059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1211 01:17:12.897238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1211 01:17:12.898789       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1211 01:17:12.901443       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1211 01:17:12.904498       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-controller-manager [91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489] <==
	I1211 01:16:09.751944       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1211 01:16:09.762008       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1211 01:16:09.763486       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1211 01:16:09.764480       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1211 01:16:09.771929       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-906108" podCIDRs=["10.244.0.0/24"]
	I1211 01:16:09.781725       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1211 01:16:09.786764       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1211 01:16:09.788021       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1211 01:16:09.790213       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1211 01:16:09.790452       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1211 01:16:09.790520       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1211 01:16:09.791094       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1211 01:16:09.791142       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1211 01:16:09.791256       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1211 01:16:09.791517       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1211 01:16:09.791712       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1211 01:16:09.792030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-906108"
	I1211 01:16:09.792089       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1211 01:16:09.796203       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1211 01:16:09.796533       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1211 01:16:09.796726       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1211 01:16:09.807959       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:16:09.808192       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1211 01:16:11.119306       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1211 01:16:54.800400       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
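The single "Unhandled Error" above is a routine optimistic-concurrency conflict: the ReplicaSet changed between the controller's read and its write, so the write was rejected and retried against the latest resourceVersion. The version that gates such writes is visible on the object itself:

    # The resourceVersion that conflicting writers must match
    kubectl -n kube-system get replicaset coredns-66bc5c9577 -o jsonpath='{.metadata.resourceVersion}{"\n"}'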
	
	
	==> kube-proxy [4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae] <==
	I1211 01:17:07.698146       1 server_linux.go:53] "Using iptables proxy"
	I1211 01:17:08.644559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 01:17:09.659147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 01:17:09.659244       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1211 01:17:09.659353       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 01:17:09.832288       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 01:17:09.832413       1 server_linux.go:132] "Using iptables Proxier"
	I1211 01:17:09.844977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 01:17:09.845359       1 server.go:527] "Version info" version="v1.34.2"
	I1211 01:17:09.845569       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:17:09.846851       1 config.go:200] "Starting service config controller"
	I1211 01:17:09.846919       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 01:17:09.847012       1 config.go:106] "Starting endpoint slice config controller"
	I1211 01:17:09.847044       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 01:17:09.847082       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 01:17:09.847136       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 01:17:09.847825       1 config.go:309] "Starting node config controller"
	I1211 01:17:09.847888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 01:17:09.847919       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 01:17:09.947296       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 01:17:09.947395       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 01:17:09.947459       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
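The one error here is kube-proxy warning that nodePortAddresses is unset, so NodePort services accept traffic on every local IP; the log itself suggests the `primary` mode as the fix. In kubeadm-based clusters like minikube that setting lives in the kube-proxy ConfigMap (a sketch; the pod must be recreated to pick up the change):

    # Locate the field, set nodePortAddresses: ["primary"], then bounce kube-proxy
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
    kubectl -n kube-system edit configmap kube-proxy
    kubectl -n kube-system delete pod -l k8s-app=kube-proxy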
	
	
	==> kube-proxy [cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129] <==
	I1211 01:16:12.000002       1 server_linux.go:53] "Using iptables proxy"
	I1211 01:16:12.081107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 01:16:12.183273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 01:16:12.183307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1211 01:16:12.183388       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 01:16:12.202378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 01:16:12.202506       1 server_linux.go:132] "Using iptables Proxier"
	I1211 01:16:12.206780       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 01:16:12.207319       1 server.go:527] "Version info" version="v1.34.2"
	I1211 01:16:12.207388       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:16:12.211767       1 config.go:106] "Starting endpoint slice config controller"
	I1211 01:16:12.211841       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 01:16:12.212181       1 config.go:200] "Starting service config controller"
	I1211 01:16:12.212226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 01:16:12.212560       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 01:16:12.212986       1 config.go:309] "Starting node config controller"
	I1211 01:16:12.216247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 01:16:12.216282       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 01:16:12.216585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 01:16:12.312914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1211 01:16:12.312899       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 01:16:12.316764       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102] <==
	I1211 01:16:03.389012       1 serving.go:386] Generated self-signed cert in-memory
	I1211 01:16:05.380043       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1211 01:16:05.380084       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:16:05.385339       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1211 01:16:05.385722       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1211 01:16:05.385744       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1211 01:16:05.385770       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1211 01:16:05.391536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.391564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.395132       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:05.395162       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:05.486672       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1211 01:16:05.492154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.495785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:56.789306       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1211 01:16:56.789334       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1211 01:16:56.789354       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1211 01:16:56.789378       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:56.789394       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:56.789422       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1211 01:16:56.789676       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1211 01:16:56.789700       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438] <==
	I1211 01:17:08.420987       1 serving.go:386] Generated self-signed cert in-memory
	I1211 01:17:10.218536       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1211 01:17:10.218632       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:17:10.225777       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1211 01:17:10.225875       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1211 01:17:10.225967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.226003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.226047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:17:10.226076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:17:10.226289       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1211 01:17:10.226355       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1211 01:17:10.326412       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1211 01:17:10.326625       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.327481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 11 01:17:04 pause-906108 kubelet[1307]: I1211 01:17:04.492966    1307 scope.go:117] "RemoveContainer" containerID="a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.493823    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494249    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5z5t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494596    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-qrtg8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a76580f4-b7b6-41c6-848e-47f2bd78b1a0" pod="kube-system/coredns-66bc5c9577-qrtg8"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494921    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8e85f31da104a0f4a9b474bf381a4ea" pod="kube-system/etcd-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.495254    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="84b0a1d517f5c4e9ddb51ced297e49b5" pod="kube-system/kube-apiserver-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.495562    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: I1211 01:17:04.531685    1307 scope.go:117] "RemoveContainer" containerID="cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.532448    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8e85f31da104a0f4a9b474bf381a4ea" pod="kube-system/etcd-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533060    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="84b0a1d517f5c4e9ddb51ced297e49b5" pod="kube-system/kube-apiserver-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533558    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533961    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.534399    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5z5t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.534761    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mgks\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9a6ccf72-6f7d-4c2d-bd59-6251e435d675" pod="kube-system/kube-proxy-4mgks"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.535320    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-qrtg8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a76580f4-b7b6-41c6-848e-47f2bd78b1a0" pod="kube-system/coredns-66bc5c9577-qrtg8"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.267517    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-906108\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268148    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268275    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268347    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.363232    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-906108\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.436506    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-h5z5t\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:16 pause-906108 kubelet[1307]: W1211 01:17:16.467021    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 11 01:17:23 pause-906108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 11 01:17:23 pause-906108 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 11 01:17:23 pause-906108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
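The kube-proxy warning captured above ("nodePortAddresses is unset; NodePort connections will be accepted on all local IPs") is advisory and does not fail the test. In a kubeadm-provisioned cluster like this one, kube-proxy is configured through the kube-proxy ConfigMap in kube-system, so one plausible way to inspect that setting (standard kubeadm naming assumed) is:

  kubectl --context pause-906108 -n kube-system get configmap kube-proxy -o yaml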
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-906108 -n pause-906108
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-906108 -n pause-906108: exit status 2 (347.882768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
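The --format arguments used here ({{.APIServer}}, {{.Host}}) are Go templates evaluated against minikube's status struct, and the non-zero exit encodes the reported component state rather than a command failure, which is why the harness records it as "(may be ok)". To dump every status field at once, the status command's JSON output mode can be used instead:

  out/minikube-linux-arm64 status -p pause-906108 -o json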
helpers_test.go:270: (dbg) Run:  kubectl --context pause-906108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
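This field selector asks the API server for pods in any namespace whose status.phase is not Running; an empty result means nothing was left Pending or Failed after the pause. The same mechanism can match a single phase directly, for example:

  kubectl --context pause-906108 get po -A --field-selector=status.phase=Failed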
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-906108
helpers_test.go:244: (dbg) docker inspect pause-906108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536",
	        "Created": "2025-12-11T01:15:41.335968704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-11T01:15:41.417666018Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/hosts",
	        "LogPath": "/var/lib/docker/containers/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536/dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536-json.log",
	        "Name": "/pause-906108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-906108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-906108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc2c66c19aef882f3ac51f8d85667ba7936474f49087ea40b818bd2f08e0f536",
	                "LowerDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759-init/diff:/var/lib/docker/overlay2/e48d8ef9f088f299bfa69fb034f5df7b5a0e60115ac22c9dde56d9e141a3e7e1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8fdadd153c7673a541955d1e6be34dff55b8b5d811cf852bb7553fc55a7c8759/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-906108",
	                "Source": "/var/lib/docker/volumes/pause-906108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-906108",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-906108",
	                "name.minikube.sigs.k8s.io": "pause-906108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4829b21697011fd29eb4fbcca8d7c5e2dfbab21f4c9cfebb7683a32fdecc10a",
	            "SandboxKey": "/var/run/docker/netns/a4829b216970",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-906108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:7c:a9:b5:e4:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4286905c34e2d59ba56440d34ea995d51e26da2a42cbccea99ea6784d3815231",
	                    "EndpointID": "74d4b723024297c720109376c7334b6b7806e32a8f36d5d421d12a3f3dabb1ac",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-906108",
	                        "dc2c66c19aef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
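Instead of dumping the whole inspect document, a Go template can extract a single field; minikube itself does exactly this later in these logs (see the cli_runner lines) to recover the container's address. A minimal equivalent:

  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-906108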
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-906108 -n pause-906108
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-906108 -n pause-906108: exit status 2 (343.587476ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-906108 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-906108 logs -n 25: (1.359599849s)
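The -n 25 flag limits the capture to the trailing 25 lines of each component's log, which is why the excerpts below begin mid-stream. For a complete capture, the unbounded output can be written to a file instead (using the logs command's --file flag):

  out/minikube-linux-arm64 -p pause-906108 logs --file=pause-906108.log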
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-899269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:03 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p missing-upgrade-724666 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-724666    │ jenkins │ v1.35.0 │ 11 Dec 25 01:03 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ stop    │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p NoKubernetes-899269 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p missing-upgrade-724666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ ssh     │ -p NoKubernetes-899269 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │                     │
	│ delete  │ -p NoKubernetes-899269                                                                                                                          │ NoKubernetes-899269       │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:04 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:04 UTC │ 11 Dec 25 01:05 UTC │
	│ stop    │ -p kubernetes-upgrade-174503                                                                                                                    │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ delete  │ -p missing-upgrade-724666                                                                                                                       │ missing-upgrade-724666    │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:05 UTC │
	│ start   │ -p kubernetes-upgrade-174503 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-174503 │ jenkins │ v1.37.0 │ 11 Dec 25 01:05 UTC │                     │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:05 UTC │ 11 Dec 25 01:06 UTC │
	│ stop    │ stopped-upgrade-421398 stop                                                                                                                     │ stopped-upgrade-421398    │ jenkins │ v1.35.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:06 UTC │
	│ start   │ -p stopped-upgrade-421398 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:06 UTC │ 11 Dec 25 01:10 UTC │
	│ delete  │ -p stopped-upgrade-421398                                                                                                                       │ stopped-upgrade-421398    │ jenkins │ v1.37.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:10 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-335241    │ jenkins │ v1.35.0 │ 11 Dec 25 01:10 UTC │ 11 Dec 25 01:11 UTC │
	│ start   │ -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:11 UTC │ 11 Dec 25 01:15 UTC │
	│ delete  │ -p running-upgrade-335241                                                                                                                       │ running-upgrade-335241    │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:15 UTC │
	│ start   │ -p pause-906108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:15 UTC │ 11 Dec 25 01:16 UTC │
	│ start   │ -p pause-906108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:16 UTC │ 11 Dec 25 01:17 UTC │
	│ pause   │ -p pause-906108 --alsologtostderr -v=5                                                                                                          │ pause-906108              │ jenkins │ v1.37.0 │ 11 Dec 25 01:17 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/11 01:16:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 01:16:55.295745  215720 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:16:55.295944  215720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:16:55.296203  215720 out.go:374] Setting ErrFile to fd 2...
	I1211 01:16:55.296415  215720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:16:55.296705  215720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:16:55.297098  215720 out.go:368] Setting JSON to false
	I1211 01:16:55.298058  215720 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5302,"bootTime":1765410514,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 01:16:55.298137  215720 start.go:143] virtualization:  
	I1211 01:16:55.301475  215720 out.go:179] * [pause-906108] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 01:16:55.305306  215720 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 01:16:55.305449  215720 notify.go:221] Checking for updates...
	I1211 01:16:55.311361  215720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 01:16:55.314358  215720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:16:55.317457  215720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 01:16:55.320343  215720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 01:16:55.323171  215720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 01:16:55.326459  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:16:55.327090  215720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 01:16:55.360124  215720 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 01:16:55.360264  215720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:16:55.416956  215720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 01:16:55.40783222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:16:55.417065  215720 docker.go:319] overlay module found
	I1211 01:16:55.420219  215720 out.go:179] * Using the docker driver based on existing profile
	I1211 01:16:55.423043  215720 start.go:309] selected driver: docker
	I1211 01:16:55.423062  215720 start.go:927] validating driver "docker" against &{Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:16:55.423194  215720 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 01:16:55.423288  215720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 01:16:55.477292  215720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 01:16:55.468508545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 01:16:55.477704  215720 cni.go:84] Creating CNI manager for ""
	I1211 01:16:55.477765  215720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:16:55.477812  215720 start.go:353] cluster config:
	{Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:16:55.481056  215720 out.go:179] * Starting "pause-906108" primary control-plane node in "pause-906108" cluster
	I1211 01:16:55.483753  215720 cache.go:134] Beginning downloading kic base image for docker with crio
	I1211 01:16:55.486736  215720 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1211 01:16:55.489552  215720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1211 01:16:55.489815  215720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:16:55.489844  215720 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1211 01:16:55.489859  215720 cache.go:65] Caching tarball of preloaded images
	I1211 01:16:55.489928  215720 preload.go:238] Found /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1211 01:16:55.489937  215720 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1211 01:16:55.490067  215720 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/config.json ...
	I1211 01:16:55.508254  215720 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1211 01:16:55.508280  215720 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1211 01:16:55.508296  215720 cache.go:243] Successfully downloaded all kic artifacts
	I1211 01:16:55.508325  215720 start.go:360] acquireMachinesLock for pause-906108: {Name:mk59739559c15612c7a10bb76db9c6c6334a285d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 01:16:55.508387  215720 start.go:364] duration metric: took 34.691µs to acquireMachinesLock for "pause-906108"
	I1211 01:16:55.508411  215720 start.go:96] Skipping create...Using existing machine configuration
	I1211 01:16:55.508422  215720 fix.go:54] fixHost starting: 
	I1211 01:16:55.508685  215720 cli_runner.go:164] Run: docker container inspect pause-906108 --format={{.State.Status}}
	I1211 01:16:55.528536  215720 fix.go:112] recreateIfNeeded on pause-906108: state=Running err=<nil>
	W1211 01:16:55.528578  215720 fix.go:138] unexpected machine state, will restart: <nil>
	I1211 01:16:55.531854  215720 out.go:252] * Updating the running docker "pause-906108" container ...
	I1211 01:16:55.531890  215720 machine.go:94] provisionDockerMachine start ...
	I1211 01:16:55.531966  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.549753  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.550079  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.550094  215720 main.go:143] libmachine: About to run SSH command:
	hostname
	I1211 01:16:55.698717  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-906108
	
	I1211 01:16:55.698742  215720 ubuntu.go:182] provisioning hostname "pause-906108"
	I1211 01:16:55.698813  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.717883  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.718229  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.718246  215720 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-906108 && echo "pause-906108" | sudo tee /etc/hostname
	I1211 01:16:55.880385  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-906108
	
	I1211 01:16:55.880457  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:55.899069  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:55.899377  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:55.899392  215720 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-906108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-906108/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-906108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 01:16:56.071708  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1211 01:16:56.071745  215720 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22061-2739/.minikube CaCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22061-2739/.minikube}
	I1211 01:16:56.071769  215720 ubuntu.go:190] setting up certificates
	I1211 01:16:56.071779  215720 provision.go:84] configureAuth start
	I1211 01:16:56.071844  215720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-906108
	I1211 01:16:56.090515  215720 provision.go:143] copyHostCerts
	I1211 01:16:56.090608  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem, removing ...
	I1211 01:16:56.090618  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem
	I1211 01:16:56.090693  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/ca.pem (1082 bytes)
	I1211 01:16:56.090811  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem, removing ...
	I1211 01:16:56.090817  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem
	I1211 01:16:56.090849  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/cert.pem (1123 bytes)
	I1211 01:16:56.090902  215720 exec_runner.go:144] found /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem, removing ...
	I1211 01:16:56.090906  215720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem
	I1211 01:16:56.090929  215720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22061-2739/.minikube/key.pem (1679 bytes)
	I1211 01:16:56.091017  215720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem org=jenkins.pause-906108 san=[127.0.0.1 192.168.85.2 localhost minikube pause-906108]
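
provision.go:117 issues a server certificate signed by the minikube CA, carrying exactly the SAN list logged above. A compressed crypto/x509 sketch of SAN-bearing issuance; the throwaway in-memory CA is an assumption made to keep the example self-contained (minikube loads its persisted ca.pem/ca-key.pem instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube would load ca.pem/ca-key.pem instead).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert carrying the same SAN set the log reports.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-906108"}},
    		DNSNames:     []string{"localhost", "minikube", "pause-906108"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
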
	I1211 01:16:56.410820  215720 provision.go:177] copyRemoteCerts
	I1211 01:16:56.410885  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 01:16:56.410930  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:56.432528  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:16:56.538684  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1211 01:16:56.556855  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1211 01:16:56.574263  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 01:16:56.592013  215720 provision.go:87] duration metric: took 520.209998ms to configureAuth
	I1211 01:16:56.592039  215720 ubuntu.go:206] setting minikube options for container-runtime
	I1211 01:16:56.592311  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:16:56.592419  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:16:56.609658  215720 main.go:143] libmachine: Using SSH client type: native
	I1211 01:16:56.609964  215720 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33030 <nil> <nil>}
	I1211 01:16:56.609986  215720 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 01:17:02.061761  215720 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 01:17:02.061790  215720 machine.go:97] duration metric: took 6.529892008s to provisionDockerMachine
	I1211 01:17:02.061803  215720 start.go:293] postStartSetup for "pause-906108" (driver="docker")
	I1211 01:17:02.061814  215720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 01:17:02.061897  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 01:17:02.061948  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.083797  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.196169  215720 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 01:17:02.200177  215720 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1211 01:17:02.200213  215720 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1211 01:17:02.200227  215720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/addons for local assets ...
	I1211 01:17:02.200296  215720 filesync.go:126] Scanning /home/jenkins/minikube-integration/22061-2739/.minikube/files for local assets ...
	I1211 01:17:02.200389  215720 filesync.go:149] local asset: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem -> 48752.pem in /etc/ssl/certs
	I1211 01:17:02.200510  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 01:17:02.209568  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:02.232048  215720 start.go:296] duration metric: took 170.209609ms for postStartSetup
	I1211 01:17:02.232191  215720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 01:17:02.232239  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.251247  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.357773  215720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1211 01:17:02.363561  215720 fix.go:56] duration metric: took 6.85513206s for fixHost
	I1211 01:17:02.363589  215720 start.go:83] releasing machines lock for "pause-906108", held for 6.855187782s
	I1211 01:17:02.363678  215720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-906108
	I1211 01:17:02.384696  215720 ssh_runner.go:195] Run: cat /version.json
	I1211 01:17:02.384772  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.385690  215720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 01:17:02.385781  215720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-906108
	I1211 01:17:02.413230  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.417279  215720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33030 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/pause-906108/id_rsa Username:docker}
	I1211 01:17:02.524172  215720 ssh_runner.go:195] Run: systemctl --version
	I1211 01:17:02.620040  215720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 01:17:02.680911  215720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 01:17:02.687185  215720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 01:17:02.687343  215720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 01:17:02.700286  215720 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
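
The find/-exec mv pass above sidelines any bridge or podman CNI configs by appending .mk_disabled, leaving kindnet as the only active plugin. A rough pure-Go equivalent; disableBridgeConfs is an invented name and the matching is simplified to substring checks:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeConfs mimics the logged `find ... -exec mv {} {}.mk_disabled`:
    // rename first-level files under dir whose names mention bridge or podman,
    // unless they are already disabled. Illustrative only.
    func disableBridgeConfs(dir string) error {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return err
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return err
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    	return nil
    }

    func main() {
    	_ = disableBridgeConfs("/etc/cni/net.d")
    }
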
	I1211 01:17:02.700358  215720 start.go:496] detecting cgroup driver to use...
	I1211 01:17:02.700407  215720 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1211 01:17:02.700490  215720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 01:17:02.723472  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 01:17:02.740836  215720 docker.go:218] disabling cri-docker service (if available) ...
	I1211 01:17:02.740934  215720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 01:17:02.761175  215720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 01:17:02.776274  215720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 01:17:02.926123  215720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 01:17:03.080759  215720 docker.go:234] disabling docker service ...
	I1211 01:17:03.080854  215720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 01:17:03.098821  215720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 01:17:03.115480  215720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 01:17:03.259171  215720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 01:17:03.411361  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 01:17:03.428164  215720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 01:17:03.445255  215720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1211 01:17:03.445399  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.456154  215720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 01:17:03.456256  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.467382  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.480004  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.489793  215720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 01:17:03.500082  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.511151  215720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.522472  215720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 01:17:03.534212  215720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 01:17:03.543301  215720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 01:17:03.551779  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:03.700320  215720 ssh_runner.go:195] Run: sudo systemctl restart crio
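
crio.go:59/70 above patch /etc/crio/crio.conf.d/02-crio.conf with whole-line sed substitutions rather than a TOML parser, then restart the daemon. A minimal Go sketch of the two simple rewrites (pause_image, cgroup_manager); patchCrioConf is hypothetical and the default_sysctls splice is omitted:

    package main

    import (
    	"os"
    	"strings"
    )

    // patchCrioConf rewrites whole lines, as the logged sed commands do:
    // any line containing `pause_image =` or `cgroup_manager =` is replaced
    // outright. Sketch only; real code would need root and a crio restart.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	for i, l := range lines {
    		switch {
    		case strings.Contains(l, "pause_image ="):
    			lines[i] = `pause_image = "registry.k8s.io/pause:3.10.1"`
    		case strings.Contains(l, "cgroup_manager ="):
    			lines[i] = `cgroup_manager = "cgroupfs"`
    		}
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
    	_ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf")
    }
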
	I1211 01:17:03.942757  215720 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 01:17:03.942854  215720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 01:17:03.947462  215720 start.go:564] Will wait 60s for crictl version
	I1211 01:17:03.947532  215720 ssh_runner.go:195] Run: which crictl
	I1211 01:17:03.951917  215720 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1211 01:17:03.980378  215720 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1211 01:17:03.980477  215720 ssh_runner.go:195] Run: crio --version
	I1211 01:17:04.012707  215720 ssh_runner.go:195] Run: crio --version
	I1211 01:17:04.070542  215720 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1211 01:17:04.074338  215720 cli_runner.go:164] Run: docker network inspect pause-906108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1211 01:17:04.095585  215720 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1211 01:17:04.100487  215720 kubeadm.go:884] updating cluster {Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 01:17:04.100643  215720 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1211 01:17:04.100700  215720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:04.142959  215720 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:04.143008  215720 crio.go:433] Images already preloaded, skipping extraction
	I1211 01:17:04.143074  215720 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 01:17:04.175519  215720 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 01:17:04.175542  215720 cache_images.go:86] Images are preloaded, skipping loading
	I1211 01:17:04.175550  215720 kubeadm.go:935] updating node { 192.168.85.2  8443 v1.34.2 crio true true} ...
	I1211 01:17:04.175664  215720 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-906108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 01:17:04.175772  215720 ssh_runner.go:195] Run: crio config
	I1211 01:17:04.243345  215720 cni.go:84] Creating CNI manager for ""
	I1211 01:17:04.243385  215720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1211 01:17:04.243417  215720 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1211 01:17:04.243442  215720 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-906108 NodeName:pause-906108 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 01:17:04.243614  215720 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-906108"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
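
The kubeadm.yaml dump above is rendered from the kubeadm.go:190 options into a Go text template. A toy version of that render step, trimmed to a handful of the values visible in the log; the struct and template here are illustrative, not minikube's real ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Toy subset of minikube's kubeadm parameters; the real struct
    // (kubeadm.go:190) carries far more fields.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    	ServiceSubnet    string
    	K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	p := kubeadmParams{
    		AdvertiseAddress: "192.168.85.2",
    		BindPort:         8443,
    		NodeName:         "pause-906108",
    		PodSubnet:        "10.244.0.0/16",
    		ServiceSubnet:    "10.96.0.0/12",
    		K8sVersion:       "v1.34.2",
    	}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
    }
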
	
	I1211 01:17:04.243697  215720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1211 01:17:04.253095  215720 binaries.go:51] Found k8s binaries, skipping transfer
	I1211 01:17:04.253244  215720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 01:17:04.262349  215720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1211 01:17:04.278639  215720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 01:17:04.295232  215720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1211 01:17:04.310317  215720 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1211 01:17:04.314866  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:04.514772  215720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 01:17:04.543020  215720 certs.go:69] Setting up /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108 for IP: 192.168.85.2
	I1211 01:17:04.543055  215720 certs.go:195] generating shared ca certs ...
	I1211 01:17:04.543087  215720 certs.go:227] acquiring lock for ca certs: {Name:mk762570f3fb8980e7332d0ab5090c94eedaf31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:04.543265  215720 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key
	I1211 01:17:04.543340  215720 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key
	I1211 01:17:04.543356  215720 certs.go:257] generating profile certs ...
	I1211 01:17:04.543512  215720 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key
	I1211 01:17:04.543599  215720 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.key.520b1307
	I1211 01:17:04.543663  215720 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.key
	I1211 01:17:04.543815  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem (1338 bytes)
	W1211 01:17:04.543867  215720 certs.go:480] ignoring /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875_empty.pem, impossibly tiny 0 bytes
	I1211 01:17:04.543881  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca-key.pem (1679 bytes)
	I1211 01:17:04.543918  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/ca.pem (1082 bytes)
	I1211 01:17:04.543957  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/cert.pem (1123 bytes)
	I1211 01:17:04.543987  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/certs/key.pem (1679 bytes)
	I1211 01:17:04.544048  215720 certs.go:484] found cert: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem (1708 bytes)
	I1211 01:17:04.544792  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 01:17:04.574562  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1211 01:17:04.609768  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 01:17:04.654389  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 01:17:04.727758  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 01:17:04.778641  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 01:17:04.820874  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 01:17:04.854097  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 01:17:04.884578  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 01:17:04.944219  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/certs/4875.pem --> /usr/share/ca-certificates/4875.pem (1338 bytes)
	I1211 01:17:04.988947  215720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/ssl/certs/48752.pem --> /usr/share/ca-certificates/48752.pem (1708 bytes)
	I1211 01:17:05.024876  215720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 01:17:05.045324  215720 ssh_runner.go:195] Run: openssl version
	I1211 01:17:05.058139  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.067768  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1211 01:17:05.080781  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.088506  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 23:52 /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.088604  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 01:17:05.149983  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1211 01:17:05.160134  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.170485  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4875.pem /etc/ssl/certs/4875.pem
	I1211 01:17:05.180464  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.186214  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 00:03 /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.186297  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4875.pem
	I1211 01:17:05.235516  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1211 01:17:05.245543  215720 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.255398  215720 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/48752.pem /etc/ssl/certs/48752.pem
	I1211 01:17:05.265364  215720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.270907  215720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 00:03 /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.271060  215720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/48752.pem
	I1211 01:17:05.317990  215720 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
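
The openssl x509 -hash / ln -fs pairs above (b5213941.0, 51391683.0, 3ec20f2e.0) install each CA into the system trust store under its OpenSSL subject-hash name. A sketch of the same sequence driven from Go; linkCert is an invented helper that shells out to openssl exactly as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert reproduces the logged sequence: compute the OpenSSL subject
    // hash of a CA file and symlink /etc/ssl/certs/<hash>.0 at it so the
    // system trust store picks it up. Helper name is invented.
    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // mirror the force flag in `ln -fs`
    	if err := os.Symlink(pemPath, link); err != nil {
    		return err
    	}
    	fmt.Println("linked", link, "->", pemPath)
    	return nil
    }

    func main() {
    	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem")
    }
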
	I1211 01:17:05.329647  215720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 01:17:05.336478  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1211 01:17:05.385326  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1211 01:17:05.431536  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1211 01:17:05.485492  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1211 01:17:05.544249  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1211 01:17:05.612935  215720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
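
Each -checkend 86400 call above asks whether a certificate expires within the next day. The same test in pure Go, without shelling out; expiresWithin is a hypothetical helper:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin answers the same question as `openssl x509 -checkend`:
    // does the certificate's NotAfter fall within the next duration d?
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
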
	I1211 01:17:05.700090  215720 kubeadm.go:401] StartCluster: {Name:pause-906108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-906108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 01:17:05.700247  215720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 01:17:05.700339  215720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 01:17:05.777868  215720 cri.go:89] found id: "4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae"
	I1211 01:17:05.777916  215720 cri.go:89] found id: "fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438"
	I1211 01:17:05.777923  215720 cri.go:89] found id: "df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d"
	I1211 01:17:05.777927  215720 cri.go:89] found id: "1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017"
	I1211 01:17:05.777931  215720 cri.go:89] found id: "c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d"
	I1211 01:17:05.777947  215720 cri.go:89] found id: "09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd"
	I1211 01:17:05.777959  215720 cri.go:89] found id: "8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1"
	I1211 01:17:05.777966  215720 cri.go:89] found id: "fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233"
	I1211 01:17:05.777970  215720 cri.go:89] found id: "cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	I1211 01:17:05.777978  215720 cri.go:89] found id: "a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2"
	I1211 01:17:05.777986  215720 cri.go:89] found id: "dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f"
	I1211 01:17:05.777990  215720 cri.go:89] found id: "9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b"
	I1211 01:17:05.777994  215720 cri.go:89] found id: "91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489"
	I1211 01:17:05.777998  215720 cri.go:89] found id: "a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	I1211 01:17:05.778001  215720 cri.go:89] found id: ""
	I1211 01:17:05.778071  215720 ssh_runner.go:195] Run: sudo runc list -f json
	W1211 01:17:05.797915  215720 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-11T01:17:05Z" level=error msg="open /run/runc: no such file or directory"
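
The warning above is expected on a restart: /run/runc does not exist yet, runc exits 1, and kubeadm.go treats that as "nothing paused" and proceeds to the config check on the next line. A sketch of that tolerant invocation; listPaused is an invented name:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // listPaused shells out like the logged step does; when /run/runc is
    // absent runc exits non-zero, which is treated as "no paused
    // containers" rather than a fatal error. Illustrative helper.
    func listPaused() ([]byte, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// e.g. "open /run/runc: no such file or directory" on stderr
    		return nil, nil
    	}
    	return out, err
    }

    func main() {
    	out, err := listPaused()
    	fmt.Println(string(out), err)
    }
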
	I1211 01:17:05.798025  215720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 01:17:05.812393  215720 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1211 01:17:05.812434  215720 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1211 01:17:05.812503  215720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1211 01:17:05.822507  215720 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1211 01:17:05.823317  215720 kubeconfig.go:125] found "pause-906108" server: "https://192.168.85.2:8443"
	I1211 01:17:05.826541  215720 kapi.go:59] client config for pause-906108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 01:17:05.827604  215720 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1211 01:17:05.827639  215720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 01:17:05.827646  215720 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1211 01:17:05.827664  215720 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1211 01:17:05.827775  215720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 01:17:05.828410  215720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1211 01:17:05.850454  215720 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1211 01:17:05.850493  215720 kubeadm.go:602] duration metric: took 38.051478ms to restartPrimaryControlPlane
	I1211 01:17:05.850504  215720 kubeadm.go:403] duration metric: took 150.423895ms to StartCluster
	I1211 01:17:05.850522  215720 settings.go:142] acquiring lock: {Name:mka61ebe499f15c79a43622cbdfdcf3261b6de4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:05.850595  215720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 01:17:05.851522  215720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/kubeconfig: {Name:mke5ac8842cd78a47390269a3f7c36dd976986aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 01:17:05.851807  215720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 01:17:05.852143  215720 config.go:182] Loaded profile config "pause-906108": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:17:05.852454  215720 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 01:17:05.855385  215720 out.go:179] * Verifying Kubernetes components...
	I1211 01:17:05.857264  215720 out.go:179] * Enabled addons: 
	I1211 01:17:05.859319  215720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 01:17:05.861126  215720 addons.go:530] duration metric: took 8.673151ms for enable addons: enabled=[]
	I1211 01:17:06.098574  215720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 01:17:06.117657  215720 node_ready.go:35] waiting up to 6m0s for node "pause-906108" to be "Ready" ...
	I1211 01:17:09.563158  215720 node_ready.go:49] node "pause-906108" is "Ready"
	I1211 01:17:09.563184  215720 node_ready.go:38] duration metric: took 3.445495835s for node "pause-906108" to be "Ready" ...
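
node_ready.go polls the API server until the node reports a Ready condition. A compact client-go sketch of the same wait, assuming a kubeconfig at the logged path; names and intervals are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22061-2739/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 2s for up to 6m, matching the logged wait window.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "pause-906108", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient apiserver error; keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node Ready wait finished:", err)
    }
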
	I1211 01:17:09.563198  215720 api_server.go:52] waiting for apiserver process to appear ...
	I1211 01:17:09.563260  215720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 01:17:09.583554  215720 api_server.go:72] duration metric: took 3.731705299s to wait for apiserver process to appear ...
	I1211 01:17:09.583576  215720 api_server.go:88] waiting for apiserver healthz status ...
	I1211 01:17:09.583594  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:09.649813  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1211 01:17:09.649881  215720 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1211 01:17:10.084554  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:10.092844  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1211 01:17:10.092914  215720 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1211 01:17:10.584652  215720 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1211 01:17:10.592732  215720 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1211 01:17:10.594094  215720 api_server.go:141] control plane version: v1.34.2
	I1211 01:17:10.594126  215720 api_server.go:131] duration metric: took 1.010542684s to wait for apiserver health ...
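
The healthz phase above is a plain HTTPS poll: keep GETting /healthz, log the 500 bodies, stop at 200/ok. A minimal net/http sketch; it skips TLS verification purely for brevity, whereas minikube authenticates with the profile's client certificate and the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// minikube pins the cluster CA and presents client certs; skipping
    	// verification here only keeps the example short.
    	c := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := c.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthy:", string(body)) // prints "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
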
	I1211 01:17:10.594136  215720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 01:17:10.597318  215720 system_pods.go:59] 7 kube-system pods found
	I1211 01:17:10.597368  215720 system_pods.go:61] "coredns-66bc5c9577-qrtg8" [a76580f4-b7b6-41c6-848e-47f2bd78b1a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 01:17:10.597378  215720 system_pods.go:61] "etcd-pause-906108" [d0ddab26-b8d9-4436-9c69-fd89c373c180] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1211 01:17:10.597384  215720 system_pods.go:61] "kindnet-h5z5t" [bb2b435e-fada-4f6d-8cc1-44fd7cfca57a] Running
	I1211 01:17:10.597391  215720 system_pods.go:61] "kube-apiserver-pause-906108" [62c83569-e6b3-4f43-b3c2-2b7f703fcf9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1211 01:17:10.597403  215720 system_pods.go:61] "kube-controller-manager-pause-906108" [a3a4095f-c17f-4fb6-b53d-2e932feb6cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1211 01:17:10.597411  215720 system_pods.go:61] "kube-proxy-4mgks" [9a6ccf72-6f7d-4c2d-bd59-6251e435d675] Running
	I1211 01:17:10.597417  215720 system_pods.go:61] "kube-scheduler-pause-906108" [14c8467d-2742-4ef4-a0d7-d516d80f9913] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1211 01:17:10.597423  215720 system_pods.go:74] duration metric: took 3.281114ms to wait for pod list to return data ...
	I1211 01:17:10.597434  215720 default_sa.go:34] waiting for default service account to be created ...
	I1211 01:17:10.600070  215720 default_sa.go:45] found service account: "default"
	I1211 01:17:10.600143  215720 default_sa.go:55] duration metric: took 2.702541ms for default service account to be created ...
	I1211 01:17:10.600159  215720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 01:17:10.604777  215720 system_pods.go:86] 7 kube-system pods found
	I1211 01:17:10.604814  215720 system_pods.go:89] "coredns-66bc5c9577-qrtg8" [a76580f4-b7b6-41c6-848e-47f2bd78b1a0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1211 01:17:10.604827  215720 system_pods.go:89] "etcd-pause-906108" [d0ddab26-b8d9-4436-9c69-fd89c373c180] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1211 01:17:10.604833  215720 system_pods.go:89] "kindnet-h5z5t" [bb2b435e-fada-4f6d-8cc1-44fd7cfca57a] Running
	I1211 01:17:10.604841  215720 system_pods.go:89] "kube-apiserver-pause-906108" [62c83569-e6b3-4f43-b3c2-2b7f703fcf9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1211 01:17:10.604849  215720 system_pods.go:89] "kube-controller-manager-pause-906108" [a3a4095f-c17f-4fb6-b53d-2e932feb6cca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1211 01:17:10.604857  215720 system_pods.go:89] "kube-proxy-4mgks" [9a6ccf72-6f7d-4c2d-bd59-6251e435d675] Running
	I1211 01:17:10.604865  215720 system_pods.go:89] "kube-scheduler-pause-906108" [14c8467d-2742-4ef4-a0d7-d516d80f9913] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1211 01:17:10.604878  215720 system_pods.go:126] duration metric: took 4.712681ms to wait for k8s-apps to be running ...
	I1211 01:17:10.604887  215720 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 01:17:10.604951  215720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 01:17:10.627406  215720 system_svc.go:56] duration metric: took 22.510351ms WaitForService to wait for kubelet
	I1211 01:17:10.627433  215720 kubeadm.go:587] duration metric: took 4.775588703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 01:17:10.627453  215720 node_conditions.go:102] verifying NodePressure condition ...
	I1211 01:17:10.646019  215720 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1211 01:17:10.646048  215720 node_conditions.go:123] node cpu capacity is 2
	I1211 01:17:10.646061  215720 node_conditions.go:105] duration metric: took 18.603379ms to run NodePressure ...
	I1211 01:17:10.646074  215720 start.go:242] waiting for startup goroutines ...
	I1211 01:17:10.646081  215720 start.go:247] waiting for cluster config update ...
	I1211 01:17:10.646090  215720 start.go:256] writing updated cluster config ...
	I1211 01:17:10.646403  215720 ssh_runner.go:195] Run: rm -f paused
	I1211 01:17:10.650021  215720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 01:17:10.650746  215720 kapi.go:59] client config for pause-906108: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.crt", KeyFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/profiles/pause-906108/client.key", CAFile:"/home/jenkins/minikube-integration/22061-2739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4f10), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 01:17:10.658801  215720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qrtg8" in "kube-system" namespace to be "Ready" or be gone ...
	W1211 01:17:12.664611  215720 pod_ready.go:104] pod "coredns-66bc5c9577-qrtg8" is not "Ready", error: <nil>
	W1211 01:17:14.666764  215720 pod_ready.go:104] pod "coredns-66bc5c9577-qrtg8" is not "Ready", error: <nil>
	I1211 01:17:16.666190  215720 pod_ready.go:94] pod "coredns-66bc5c9577-qrtg8" is "Ready"
	I1211 01:17:16.666222  215720 pod_ready.go:86] duration metric: took 6.007344743s for pod "coredns-66bc5c9577-qrtg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:16.672412  215720 pod_ready.go:83] waiting for pod "etcd-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	W1211 01:17:18.677969  215720 pod_ready.go:104] pod "etcd-pause-906108" is not "Ready", error: <nil>
	W1211 01:17:20.678223  215720 pod_ready.go:104] pod "etcd-pause-906108" is not "Ready", error: <nil>
	I1211 01:17:21.177359  215720 pod_ready.go:94] pod "etcd-pause-906108" is "Ready"
	I1211 01:17:21.177389  215720 pod_ready.go:86] duration metric: took 4.504947347s for pod "etcd-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.179761  215720 pod_ready.go:83] waiting for pod "kube-apiserver-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.184373  215720 pod_ready.go:94] pod "kube-apiserver-pause-906108" is "Ready"
	I1211 01:17:21.184396  215720 pod_ready.go:86] duration metric: took 4.608492ms for pod "kube-apiserver-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:21.186613  215720 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.692047  215720 pod_ready.go:94] pod "kube-controller-manager-pause-906108" is "Ready"
	I1211 01:17:22.692076  215720 pod_ready.go:86] duration metric: took 1.505439195s for pod "kube-controller-manager-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.694082  215720 pod_ready.go:83] waiting for pod "kube-proxy-4mgks" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.698001  215720 pod_ready.go:94] pod "kube-proxy-4mgks" is "Ready"
	I1211 01:17:22.698025  215720 pod_ready.go:86] duration metric: took 3.916795ms for pod "kube-proxy-4mgks" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:22.775717  215720 pod_ready.go:83] waiting for pod "kube-scheduler-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:23.176115  215720 pod_ready.go:94] pod "kube-scheduler-pause-906108" is "Ready"
	I1211 01:17:23.176145  215720 pod_ready.go:86] duration metric: took 400.402204ms for pod "kube-scheduler-pause-906108" in "kube-system" namespace to be "Ready" or be gone ...
	I1211 01:17:23.176158  215720 pod_ready.go:40] duration metric: took 12.526059058s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1211 01:17:23.231645  215720 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1211 01:17:23.235297  215720 out.go:179] * Done! kubectl is now configured to use "pause-906108" cluster and "default" namespace by default
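
The pod_ready.go lines above are minikube polling each kube-system control-plane pod until its Ready condition is True, on a 4m0s budget with roughly 2s between checks. A minimal client-go sketch of that loop, assuming a default kubeconfig (the label selectors match the ones listed in the log; everything else is illustrative, not minikube's actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// the same check pod_ready.go logs as `pod "..." is "Ready"`.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // the "extra waiting up to 4m0s" budget
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod for %q is Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("gave up waiting for %q\n", sel)
				break
			}
			time.Sleep(2 * time.Second) // the log shows ~2s poll intervals
		}
	}
}
```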
	
	
	==> CRI-O <==
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.662674485Z" level=info msg="Creating container: kube-system/kube-proxy-4mgks/kube-proxy" id=dcb803c5-b04f-422c-9091-2c767d8d58ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.670506243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.681862334Z" level=info msg="Created container df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d: kube-system/etcd-pause-906108/etcd" id=c2dad993-771e-4c1c-84f2-0c64e06d5d9f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.68259833Z" level=info msg="Starting container: df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d" id=7dccae40-99e5-42db-bebf-d0e58738b1e1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.703644629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.704225934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.713798545Z" level=info msg="Started container" PID=2237 containerID=df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d description=kube-system/etcd-pause-906108/etcd id=7dccae40-99e5-42db-bebf-d0e58738b1e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a953e50a768c88584be1b577bc0d9157848ba25ebba8bf55c76bad24776d8e9
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.756632177Z" level=info msg="Created container fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438: kube-system/kube-scheduler-pause-906108/kube-scheduler" id=43777fdf-692b-4e31-a90d-d54fc7ad0613 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.759766098Z" level=info msg="Starting container: fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438" id=2dba024d-b8e5-4050-a9f1-dd7093eff74a name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:04 pause-906108 crio[2050]: time="2025-12-11T01:17:04.763282807Z" level=info msg="Started container" PID=2258 containerID=fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438 description=kube-system/kube-scheduler-pause-906108/kube-scheduler id=2dba024d-b8e5-4050-a9f1-dd7093eff74a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f678fd8871f070c0c788d8295cb6965fa502f2a44e05564ff799a41e8a1b6a07
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.103436752Z" level=info msg="Created container 4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae: kube-system/kube-proxy-4mgks/kube-proxy" id=dcb803c5-b04f-422c-9091-2c767d8d58ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.104282436Z" level=info msg="Starting container: 4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae" id=fc3d755a-99ac-4173-8314-92623f9c3dd6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 11 01:17:05 pause-906108 crio[2050]: time="2025-12-11T01:17:05.108425578Z" level=info msg="Started container" PID=2272 containerID=4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae description=kube-system/kube-proxy-4mgks/kube-proxy id=fc3d755a-99ac-4173-8314-92623f9c3dd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aeea51ab827ec1b70e3f2920d7537ef0bd6d92706762bf0355842159be92352
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.968596546Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972217043Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972251513Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.972274923Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976000216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976049111Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.976069214Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979431928Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979466472Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.979489709Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.982734163Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 11 01:17:14 pause-906108 crio[2050]: time="2025-12-11T01:17:14.982897635Z" level=info msg="Updated default CNI network name to kindnet"
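
The CNI monitoring events above show CRI-O reacting to kindnet rewriting its network config atomically: write 10-kindnet.conflist.temp, then rename it over 10-kindnet.conflist, each step triggering a re-scan. A rough sketch of that watch loop with fsnotify (an illustration of the pattern, not CRI-O's actual code):

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI config directory, as CRI-O does.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				// A runtime would re-scan the directory here and pick the
				// default network, hence "Updated default CNI network name".
			}
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```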
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4a34a124a3f7d       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   23 seconds ago       Running             kube-proxy                1                   1aeea51ab827e       kube-proxy-4mgks                       kube-system
	fcc376b66efd9       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   23 seconds ago       Running             kube-scheduler            1                   f678fd8871f07       kube-scheduler-pause-906108            kube-system
	df65390879ea7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   23 seconds ago       Running             etcd                      1                   4a953e50a768c       etcd-pause-906108                      kube-system
	1992792c47a16       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   7bf03e7d0de1f       kube-apiserver-pause-906108            kube-system
	c21800dea36b1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   9019270d447c6       coredns-66bc5c9577-qrtg8               kube-system
	09093121fe417       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   25d083c844afe       kube-controller-manager-pause-906108   kube-system
	8b74cb1d38f11       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   00e03dc535245       kindnet-h5z5t                          kube-system
	fc6d16835668f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   9019270d447c6       coredns-66bc5c9577-qrtg8               kube-system
	cd13342125fd1       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   1aeea51ab827e       kube-proxy-4mgks                       kube-system
	a9a4d4edb8cb9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   00e03dc535245       kindnet-h5z5t                          kube-system
	dc18fcddd44f0       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   7bf03e7d0de1f       kube-apiserver-pause-906108            kube-system
	9170d36b0b6a8       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   4a953e50a768c       etcd-pause-906108                      kube-system
	91153c0ce99a5       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   25d083c844afe       kube-controller-manager-pause-906108   kube-system
	a1c6a7de725aa       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f678fd8871f07       kube-scheduler-pause-906108            kube-system
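
This table is rendered from the CRI ListContainers RPC, the same call `crictl ps -a` makes. A minimal Go client for it, assuming CRI-O's default socket path (/var/run/crio/crio.sock):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Mirrors the CONTAINER / NAME / ATTEMPT / STATE columns above.
		fmt.Printf("%.13s  %-25s  attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
```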
	
	
	==> coredns [c21800dea36b1235bc19af6c60107ba72a3d7395c740b35bb39bb189de7cac2d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46510 - 42823 "HINFO IN 3458944203887410954.8276805735454187518. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041293906s
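
This CoreDNS replica started while the restarted apiserver was still coming up, so the kubernetes plugin's reflector Lists were refused and the server eventually started "with unsynced Kubernetes API". Readiness is reported separately by the `ready` plugin, which kubelet probes over HTTP (port 8181 is the usual default, assumed here) and which only returns 200 once plugins like kubernetes report ready:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Probe CoreDNS's `ready` plugin endpoint; while the log above was
	// printing `Still waiting on: "kubernetes"`, this would not return 200.
	resp, err := http.Get("http://127.0.0.1:8181/ready")
	if err != nil {
		fmt.Println("coredns not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("readiness:", resp.Status)
}
```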
	
	
	==> coredns [fc6d16835668ff1a4746f528e3b3c0e740f643e3e47ed3800a857dad060da233] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60910 - 56665 "HINFO IN 3401748386353232504.324323302585184621. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.038281407s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-906108
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-906108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=42fb307a02c73788d50678300cb26a417bbce5b6
	                    minikube.k8s.io/name=pause-906108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_11T01_16_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 11 Dec 2025 01:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-906108
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 11 Dec 2025 01:17:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:15:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 11 Dec 2025 01:17:18 +0000   Thu, 11 Dec 2025 01:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-906108
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                51ae9a37-8fb0-44d0-8efd-661f74472e17
	  Boot ID:                    0edab61d-52b1-4525-85dd-848bc0b1d36e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qrtg8                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-906108                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-h5z5t                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-906108             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-906108    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-4mgks                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-906108             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-906108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-906108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-906108 status is now: NodeHasSufficientPID
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-906108 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-906108 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-906108 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-906108 event: Registered Node pause-906108 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-906108 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-906108 event: Registered Node pause-906108 in Controller
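
The Conditions and Capacity blocks in this dump are the same fields minikube's node_conditions.go reads when it verifies NodePressure near the top of the section (ephemeral storage 203034800Ki, 2 CPUs). A compact client-go sketch that prints them, assuming a default kubeconfig:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False, Ready True.
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("cpu capacity:", n.Status.Capacity.Cpu().String())
		fmt.Println("ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
	}
}
```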
	
	
	==> dmesg <==
	[  +3.845979] overlayfs: idmapped layers are currently not supported
	[Dec11 00:41] overlayfs: idmapped layers are currently not supported
	[Dec11 00:42] overlayfs: idmapped layers are currently not supported
	[ +51.416292] overlayfs: idmapped layers are currently not supported
	[  +3.779669] overlayfs: idmapped layers are currently not supported
	[Dec11 00:43] overlayfs: idmapped layers are currently not supported
	[Dec11 00:44] overlayfs: idmapped layers are currently not supported
	[Dec11 00:45] overlayfs: idmapped layers are currently not supported
	[Dec11 00:50] overlayfs: idmapped layers are currently not supported
	[Dec11 00:51] overlayfs: idmapped layers are currently not supported
	[Dec11 00:52] overlayfs: idmapped layers are currently not supported
	[Dec11 00:53] overlayfs: idmapped layers are currently not supported
	[Dec11 00:54] overlayfs: idmapped layers are currently not supported
	[Dec11 00:56] overlayfs: idmapped layers are currently not supported
	[ +19.086026] overlayfs: idmapped layers are currently not supported
	[Dec11 00:57] overlayfs: idmapped layers are currently not supported
	[ +53.287901] overlayfs: idmapped layers are currently not supported
	[Dec11 00:58] overlayfs: idmapped layers are currently not supported
	[Dec11 00:59] overlayfs: idmapped layers are currently not supported
	[ +24.341266] overlayfs: idmapped layers are currently not supported
	[Dec11 01:00] overlayfs: idmapped layers are currently not supported
	[Dec11 01:01] overlayfs: idmapped layers are currently not supported
	[Dec11 01:03] overlayfs: idmapped layers are currently not supported
	[Dec11 01:05] overlayfs: idmapped layers are currently not supported
	[Dec11 01:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9170d36b0b6a82e911ab4b0f2d16d7b55f5425d1e296ff92be798fe8a5f0ed3b] <==
	{"level":"warn","ts":"2025-12-11T01:16:01.623302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.632263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.654200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.691103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.707801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.720521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:16:01.828123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-11T01:16:56.791661Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-11T01:16:56.791709Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-906108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-11T01:16:56.791812Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-11T01:16:56.932048Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-11T01:16:56.932115Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.932137Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-11T01:16:56.932234Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-11T01:16:56.932654Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-11T01:16:56.932260Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-11T01:16:56.932869Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-11T01:16:56.932884Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-11T01:16:56.933264Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-11T01:16:56.933290Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-11T01:16:56.933299Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.939555Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-11T01:16:56.939650Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-11T01:16:56.939690Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-11T01:16:56.939699Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-906108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [df65390879ea744eb9694c52dc6683a52cadd33a2790391c7abb7452db38dd7d] <==
	{"level":"warn","ts":"2025-12-11T01:17:07.907268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.936117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.949161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:07.968696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.002353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.020097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.038086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.056669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.074243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.091966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.126629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.141302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.154640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.191036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.196129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.213788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.232073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.249415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.267247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.285223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.317382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.321063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.339462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.355996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-11T01:17:08.462803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54478","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:17:28 up  1:28,  0 user,  load average: 1.07, 1.16, 1.55
	Linux pause-906108 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b74cb1d38f112ce984a689944977360a5b812999929e2884f4bb9eae9a856c1] <==
	I1211 01:17:04.722855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1211 01:17:04.723139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1211 01:17:04.723270       1 main.go:148] setting mtu 1500 for CNI 
	I1211 01:17:04.723281       1 main.go:178] kindnetd IP family: "ipv4"
	I1211 01:17:04.723294       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-11T01:17:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1211 01:17:04.968631       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1211 01:17:04.968743       1 controller.go:381] "Waiting for informer caches to sync"
	I1211 01:17:04.968777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1211 01:17:04.974236       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1211 01:17:09.673872       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1211 01:17:09.673982       1 metrics.go:72] Registering metrics
	I1211 01:17:09.674100       1 controller.go:711] "Syncing nftables rules"
	I1211 01:17:14.968106       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:17:14.968262       1 main.go:301] handling current node
	I1211 01:17:24.968529       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:17:24.968580       1 main.go:301] handling current node
	
	
	==> kindnet [a9a4d4edb8cb9201644a5bbda87414c603cb2897ba2016ed921d7f6b1ee3dcd2] <==
	I1211 01:16:11.722836       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1211 01:16:11.723087       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1211 01:16:11.723202       1 main.go:148] setting mtu 1500 for CNI 
	I1211 01:16:11.723222       1 main.go:178] kindnetd IP family: "ipv4"
	I1211 01:16:11.723235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-11T01:16:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1211 01:16:11.923379       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1211 01:16:11.923398       1 controller.go:381] "Waiting for informer caches to sync"
	I1211 01:16:11.923410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1211 01:16:11.924274       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1211 01:16:41.923987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1211 01:16:41.923986       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1211 01:16:41.924200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1211 01:16:41.924217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1211 01:16:43.523532       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1211 01:16:43.523565       1 metrics.go:72] Registering metrics
	I1211 01:16:43.523634       1 controller.go:711] "Syncing nftables rules"
	I1211 01:16:51.930718       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1211 01:16:51.930776       1 main.go:301] handling current node
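
Both kindnet instances show the standard client-go informer startup gate: the reflector's List/Watch calls fail with i/o timeouts while the apiserver is down, then "Waiting for caches to sync", then "Caches are synced" once a List succeeds; the controller-manager logs below follow the same pattern. A minimal sketch of that gate, assuming a default kubeconfig:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	factory.Start(ctx.Done())

	// Blocks until the initial List succeeds; with the apiserver unreachable,
	// this is where kindnet sat while logging "Failed to watch" above.
	if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
		fmt.Println("caches failed to sync")
		return
	}
	fmt.Println("caches are synced")
}
```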
	
	
	==> kube-apiserver [1992792c47a16777ad66c1a99212b9a1b7dadbeab669706d8d679e4d48738017] <==
	I1211 01:17:09.529102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1211 01:17:09.529235       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1211 01:17:09.529507       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1211 01:17:09.532446       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1211 01:17:09.563363       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1211 01:17:09.590333       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1211 01:17:09.595076       1 policy_source.go:240] refreshing policies
	I1211 01:17:09.616842       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1211 01:17:09.624689       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1211 01:17:09.624955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1211 01:17:09.625093       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1211 01:17:09.625174       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1211 01:17:09.625463       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1211 01:17:09.625910       1 aggregator.go:171] initial CRD sync complete...
	I1211 01:17:09.625932       1 autoregister_controller.go:144] Starting autoregister controller
	I1211 01:17:09.625939       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1211 01:17:09.625946       1 cache.go:39] Caches are synced for autoregister controller
	I1211 01:17:09.636019       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1211 01:17:09.675976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1211 01:17:10.219633       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1211 01:17:11.462352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1211 01:17:13.048259       1 controller.go:667] quota admission added evaluator for: endpoints
	I1211 01:17:13.100321       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1211 01:17:13.148975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 01:17:13.250907       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [dc18fcddd44f0ab62893e28000e6bcab5189fee6cce998d93bd07b48e01ea24f] <==
	W1211 01:16:56.809738       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809745       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809789       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809818       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809843       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809865       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809897       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809914       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809949       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809961       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809999       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810008       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810051       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810080       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810105       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810126       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810154       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810171       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810205       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810239       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810289       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810335       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.809790       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1211 01:16:56.810054       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [09093121fe4175a382cab89889ce433e7710139bf96be3c6a2cf7762b35e1ddd] <==
	I1211 01:17:12.854116       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1211 01:17:12.854121       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1211 01:17:12.874352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:17:12.879531       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1211 01:17:12.879626       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1211 01:17:12.879647       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1211 01:17:12.879660       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1211 01:17:12.879667       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1211 01:17:12.888140       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1211 01:17:12.890392       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1211 01:17:12.891841       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1211 01:17:12.891870       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1211 01:17:12.893014       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1211 01:17:12.893017       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1211 01:17:12.893068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:17:12.893143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1211 01:17:12.893152       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1211 01:17:12.894832       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1211 01:17:12.894985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1211 01:17:12.895046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1211 01:17:12.895059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1211 01:17:12.897238       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1211 01:17:12.898789       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1211 01:17:12.901443       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1211 01:17:12.904498       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-controller-manager [91153c0ce99a5d6cde3436e8e7b1b02b483e863ea55523fbc027fc6fb8830489] <==
	I1211 01:16:09.751944       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1211 01:16:09.762008       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1211 01:16:09.763486       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1211 01:16:09.764480       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1211 01:16:09.771929       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-906108" podCIDRs=["10.244.0.0/24"]
	I1211 01:16:09.781725       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1211 01:16:09.786764       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1211 01:16:09.788021       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1211 01:16:09.790213       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1211 01:16:09.790452       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1211 01:16:09.790520       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1211 01:16:09.791094       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1211 01:16:09.791142       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1211 01:16:09.791256       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1211 01:16:09.791517       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1211 01:16:09.791712       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1211 01:16:09.792030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-906108"
	I1211 01:16:09.792089       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1211 01:16:09.796203       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1211 01:16:09.796533       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1211 01:16:09.796726       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1211 01:16:09.807959       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1211 01:16:09.808192       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1211 01:16:11.119306       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1211 01:16:54.800400       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4a34a124a3f7d4d6a24f9b8d4ef8cfa36571cedc2742014cdf80f5b6ab4196ae] <==
	I1211 01:17:07.698146       1 server_linux.go:53] "Using iptables proxy"
	I1211 01:17:08.644559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 01:17:09.659147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 01:17:09.659244       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1211 01:17:09.659353       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 01:17:09.832288       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 01:17:09.832413       1 server_linux.go:132] "Using iptables Proxier"
	I1211 01:17:09.844977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 01:17:09.845359       1 server.go:527] "Version info" version="v1.34.2"
	I1211 01:17:09.845569       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:17:09.846851       1 config.go:200] "Starting service config controller"
	I1211 01:17:09.846919       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 01:17:09.847012       1 config.go:106] "Starting endpoint slice config controller"
	I1211 01:17:09.847044       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 01:17:09.847082       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 01:17:09.847136       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 01:17:09.847825       1 config.go:309] "Starting node config controller"
	I1211 01:17:09.847888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 01:17:09.847919       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 01:17:09.947296       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1211 01:17:09.947395       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 01:17:09.947459       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129] <==
	I1211 01:16:12.000002       1 server_linux.go:53] "Using iptables proxy"
	I1211 01:16:12.081107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1211 01:16:12.183273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1211 01:16:12.183307       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1211 01:16:12.183388       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 01:16:12.202378       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1211 01:16:12.202506       1 server_linux.go:132] "Using iptables Proxier"
	I1211 01:16:12.206780       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 01:16:12.207319       1 server.go:527] "Version info" version="v1.34.2"
	I1211 01:16:12.207388       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:16:12.211767       1 config.go:106] "Starting endpoint slice config controller"
	I1211 01:16:12.211841       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1211 01:16:12.212181       1 config.go:200] "Starting service config controller"
	I1211 01:16:12.212226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1211 01:16:12.212560       1 config.go:403] "Starting serviceCIDR config controller"
	I1211 01:16:12.212986       1 config.go:309] "Starting node config controller"
	I1211 01:16:12.216247       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1211 01:16:12.216282       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1211 01:16:12.216585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1211 01:16:12.312914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1211 01:16:12.312899       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1211 01:16:12.316764       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102] <==
	I1211 01:16:03.389012       1 serving.go:386] Generated self-signed cert in-memory
	I1211 01:16:05.380043       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1211 01:16:05.380084       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:16:05.385339       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1211 01:16:05.385722       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1211 01:16:05.385744       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1211 01:16:05.385770       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1211 01:16:05.391536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.391564       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.395132       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:05.395162       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:05.486672       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1211 01:16:05.492154       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:05.495785       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:56.789306       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1211 01:16:56.789334       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1211 01:16:56.789354       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1211 01:16:56.789378       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:16:56.789394       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:16:56.789422       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1211 01:16:56.789676       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1211 01:16:56.789700       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fcc376b66efd906acbf3a9ecf55c60961139a2792a2bef9cab42628ca81ee438] <==
	I1211 01:17:08.420987       1 serving.go:386] Generated self-signed cert in-memory
	I1211 01:17:10.218536       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1211 01:17:10.218632       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 01:17:10.225777       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1211 01:17:10.225875       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1211 01:17:10.225967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.226003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.226047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:17:10.226076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1211 01:17:10.226289       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1211 01:17:10.226355       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1211 01:17:10.326412       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1211 01:17:10.326625       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1211 01:17:10.327481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 11 01:17:04 pause-906108 kubelet[1307]: I1211 01:17:04.492966    1307 scope.go:117] "RemoveContainer" containerID="a1c6a7de725aa9ce59a27fb9ac733620c9e8693c294ad16001331a7259eec102"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.493823    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494249    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5z5t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494596    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-qrtg8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a76580f4-b7b6-41c6-848e-47f2bd78b1a0" pod="kube-system/coredns-66bc5c9577-qrtg8"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.494921    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8e85f31da104a0f4a9b474bf381a4ea" pod="kube-system/etcd-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.495254    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="84b0a1d517f5c4e9ddb51ced297e49b5" pod="kube-system/kube-apiserver-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.495562    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: I1211 01:17:04.531685    1307 scope.go:117] "RemoveContainer" containerID="cd13342125fd10a41922871807b4e453b5ebd8d38156e7f6d227363ffdddd129"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.532448    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8e85f31da104a0f4a9b474bf381a4ea" pod="kube-system/etcd-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533060    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="84b0a1d517f5c4e9ddb51ced297e49b5" pod="kube-system/kube-apiserver-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533558    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.533961    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-906108\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.534399    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5z5t\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.534761    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4mgks\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9a6ccf72-6f7d-4c2d-bd59-6251e435d675" pod="kube-system/kube-proxy-4mgks"
	Dec 11 01:17:04 pause-906108 kubelet[1307]: E1211 01:17:04.535320    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-qrtg8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a76580f4-b7b6-41c6-848e-47f2bd78b1a0" pod="kube-system/coredns-66bc5c9577-qrtg8"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.267517    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-906108\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="3826120d78bad403e9141d7ccb609af7" pod="kube-system/kube-controller-manager-pause-906108"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268148    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268275    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.268347    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-906108\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.363232    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-906108\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="e5873c9135c76e7f09a87f38f72c0d74" pod="kube-system/kube-scheduler-pause-906108"
	Dec 11 01:17:09 pause-906108 kubelet[1307]: E1211 01:17:09.436506    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-h5z5t\" is forbidden: User \"system:node:pause-906108\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-906108' and this object" podUID="bb2b435e-fada-4f6d-8cc1-44fd7cfca57a" pod="kube-system/kindnet-h5z5t"
	Dec 11 01:17:16 pause-906108 kubelet[1307]: W1211 01:17:16.467021    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 11 01:17:23 pause-906108 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 11 01:17:23 pause-906108 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 11 01:17:23 pause-906108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
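
The dump above captures the restart window: the outgoing scheduler stops serving on 127.0.0.1:10259, etcd at 127.0.0.1:2379 refuses connections, and the kubelet's status posts to https://192.168.85.2:8443 fail with "connection refused" until the replacement control plane is up. Below is a minimal Go sketch (not part of the test harness; the host:port is taken from the log lines above, everything else is illustrative) of the kind of TCP probe that distinguishes a refused endpoint from a serving one:

	// probe.go - minimal sketch, not harness code. The address comes from
	// the kubelet log lines above; the timeout is an arbitrary choice.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.85.2:8443" // apiserver endpoint seen in the kubelet errors
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// During the restart window this is "connect: connection refused",
			// matching the status_manager errors in the log dump.
			fmt.Printf("apiserver unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver TCP endpoint is accepting connections")
	}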
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-906108 -n pause-906108
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-906108 -n pause-906108: exit status 2 (356.748856ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
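
The --format flag renders minikube's status through a Go text/template, which is why the -- stdout -- block above contains only the bare APIServer field ("Running") while the command still exits 2; the harness itself notes the non-zero status "may be ok". A minimal sketch of the template mechanism, with a hypothetical Status struct standing in for minikube's real one:

	// template_demo.go - minimal sketch. The Status type here is
	// hypothetical; minikube's real status struct has more fields.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints just "Running", mirroring the -- stdout -- block above.
		_ = tmpl.Execute(os.Stdout, st)
	}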
helpers_test.go:270: (dbg) Run:  kubectl --context pause-906108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.062s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1211 01:47:57.106248    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/default-k8s-diff-port-241494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1211 01:48:30.613028    4875 config.go:182] Loaded profile config "kindnet-502878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1211 01:50:09.647963    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1211 01:50:15.960973    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1211 01:50:32.355749    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/old-k8s-version-281179/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
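
Every one of those warnings is a single polling attempt by the test helper, which keeps listing dashboard pods through client-go while the stopped control plane at 192.168.76.2:8443 refuses connections. A minimal sketch of the equivalent list call, assuming an already-configured *kubernetes.Clientset (construction omitted):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listDashboardPods mirrors the GET in the warnings above:
    // /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
    func listDashboardPods(ctx context.Context, cs *kubernetes.Clientset) error {
        _, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
            LabelSelector: "k8s-app=kubernetes-dashboard",
        })
        // With the apiserver down, err wraps:
        //   dial tcp 192.168.76.2:8443: connect: connection refused
        return err
    }

The helper logs the error as a WARNING and retries, which is why the same line repeats for the whole life of AddonExistsAfterStop.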
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (32m42s)
		TestNetworkPlugins/group/calico (1m36s)
		TestStartStop (35m4s)
		TestStartStop/group/no-preload (28m25s)
		TestStartStop/group/no-preload/serial (28m25s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (3m13s)
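
The panic itself is Go's per-binary test deadline, not an assertion failure: the suite runs with -timeout 2h0m0s, and when the timer armed by testing.(*M).startAlarm fires, the binary panics and dumps every goroutine, which is the trace below. The tests still listed as running at that moment are the ones to suspect for the hang. The equivalent invocation, with the package path taken from the stack frames:

    go test ./test/integration -timeout 2h0m0s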

goroutine 6085 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000484c40, 0x40006edbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40004f84b0, {0x534c680, 0x2c, 0x2c}, {0x40006edd08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40006994a0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40006994a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88
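
Goroutine 1 is the suite entry point: main_test.go defines a TestMain wrapper around m.Run, and it has sat in a chan receive for 28 minutes waiting for a subtest to return. A sketch of that standard wrapper shape (the setup and teardown comments are hypothetical, not minikube's actual code):

    package integration

    import (
        "os"
        "testing"
    )

    func TestMain(m *testing.M) {
        // suite-wide setup would run here
        code := m.Run() // blocks until every Test* in the binary returns
        // suite-wide teardown would run here
        os.Exit(code)
    }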

goroutine 3743 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3742
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5638 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f0ad80, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5636
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 183 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4001656fc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 160
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 168 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x4004fb7f40, 0x40000d8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0xf0?, 0x4004fb7f40, 0x4004fb7f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400011e600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c
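
Goroutine 168 (with its poller child, goroutine 169) is a client-go cert-rotation worker driven by apimachinery's polling helpers. A minimal sketch of the PollImmediateUntilWithContext pattern visible in those frames; the one-second interval and the condition body are illustrative:

    import (
        "context"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func pollUntilReady(ctx context.Context) error {
        // Runs the condition immediately, then once per interval, until
        // it returns true, returns an error, or ctx is cancelled -- the
        // select shown in wait.waitForWithContext above.
        return wait.PollImmediateUntilWithContext(ctx, time.Second,
            func(ctx context.Context) (bool, error) {
                return false, nil // illustrative: never done, polls until cancelled
            })
    }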

goroutine 4240 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4239
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3439 [chan receive, 28 minutes]:
testing.(*T).Run(0x4001822a80, {0x296ec91?, 0x0?}, 0x4001922180)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001822a80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001822a80, 0x4004ee2240)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3435
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3188 [chan receive, 32 minutes]:
testing.(*T).Run(0x4001634000, {0x296d81f?, 0x3dc835c13ad?}, 0x40004f9c20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4001634000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4001634000, 0x339bbf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364
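
Goroutines 3439 and 3188 are parent tests parked inside testing.(*T).Run: each started a subtest and is blocked on the channel that the subtest closes when it finishes, hence the 28 and 32 minutes of chan receive. The nested-subtest shape that produces this state (subtest names are illustrative):

    func TestStartStop(t *testing.T) {
        t.Run("group", func(t *testing.T) {
            t.Run("no-preload", func(t *testing.T) {
                // While this body runs, both parents above block in
                // (*T).Run on a chan receive, exactly like goroutines
                // 3439 and 3188 in this dump.
            })
        })
    }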

goroutine 6002 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001b21610, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b21600)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001aadc80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4004fb4688?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x4001430df8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x4001320f38, {0x369e620, 0x40016774a0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e620?, 0x40016774a0?}, 0x0?, 0x36e6718?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001bb51d0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5999
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
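
The sync.Cond.Wait goroutines (6002 here, plus 167, 3741, and 4030 below) are all the same pattern: cert-rotation workers blocked in a typed workqueue's Get because nothing has been enqueued. A minimal sketch of that consumer loop, using client-go's generic workqueue as seen in the (*Typed[...]).Get frames; the item handling is illustrative:

    import "k8s.io/client-go/util/workqueue"

    func runWorker(q workqueue.TypedInterface[string]) {
        for {
            // Get parks the goroutine on a sync.Cond until an item
            // arrives or ShutDown is called -- the state shown above.
            item, shutdown := q.Get()
            if shutdown {
                return
            }
            _ = item // a real worker would process the item here
            q.Done(item)
        }
    }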

goroutine 167 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4004ee2650, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ee2640)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f0b020)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400049f2d0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x40006dcf38, {0x369e620, 0x40012dc960}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f43d0?, {0x369e620?, 0x40012dc960?}, 0xb0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40008752a0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 184
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1256 [select, 108 minutes]:
net/http.(*persistConn).readLoop(0x4001aa99e0)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1247
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 184 [chan receive, 116 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4004f0b020, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 160
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 659 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0xffff58949200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001308580?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001308580)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001308580)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4004ee2140)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4004ee2140)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004cc900, {0x36d4100, 0x4004ee2140})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004cc900)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 657
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
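
Goroutine 659 (and 1264 further down) is the throwaway HTTP proxy that the functional tests start; it has idled in Accept for 112 minutes because nothing shuts the server down after the test ends. startHTTPProxy reduces to this shape; the address and handler are illustrative:

    import "net/http"

    func startProxy(handler http.Handler) {
        srv := &http.Server{Addr: "127.0.0.1:0", Handler: handler}
        go func() {
            // ListenAndServe blocks in (*TCPListener).Accept waiting for
            // connections; with no Shutdown call, the goroutine stays in
            // "IO wait" for the rest of the run, as seen above.
            _ = srv.ListenAndServe()
        }()
    }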

goroutine 1136 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0x4001a02c00, 0x40019c7260)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1135
	/usr/local/go/src/os/exec/exec.go:775 +0x678
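
The chan send goroutines (1136 here, plus 1199, 1878, and 1171 below) are os/exec's internal context watchers for commands the tests ran. One plausible way to reach this state, offered as an assumption rather than a reading of the test code: a command started with exec.CommandContext whose context is cancelled but whose Wait is never called, leaving watchCtx with no reader for its result channel:

    import (
        "context"
        "os/exec"
    )

    func leakWatcher() {
        ctx, cancel := context.WithCancel(context.Background())
        cmd := exec.CommandContext(ctx, "sleep", "3600")
        _ = cmd.Start()
        cancel()
        // Without a matching cmd.Wait(), the internal watchCtx goroutine
        // parks on "chan send" for the life of the process.
    }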

goroutine 169 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 168
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3741 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x400076a950, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400076a940)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001758b40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400144cb60?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x4001b286a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x4001334f38, {0x369e620, 0x4001c440c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001b287a8?, {0x369e620?, 0x4001c440c0?}, 0x30?, 0x40002e9650?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000874410, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3749
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1199 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0x4001a4f200, 0x4001a52af0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1198
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4235 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001610c00, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4233
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3748 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4001746d80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3728
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1878 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4000694780, 0x4001616d90)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1877
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1469 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1468
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5292 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4000694600?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1264 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0xffff5861ea00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001923480?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001923480)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001923480)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001ab9c00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001ab9c00)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001a4a200, {0x36d4100, 0x4001ab9c00})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001a4a200)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1262
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3638 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x10, 0x40006eb0e8, 0x4, 0x4000643440, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x40006eb248?, 0x1929a0?, 0x4001541ae8?, 0x1?, 0x40017b6b40?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001689e00)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0xffff9f535108?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4001785b00)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4001785b00)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
os/exec.(*Cmd).CombinedOutput(0x4001785b00)
	/usr/local/go/src/os/exec/exec.go:1039 +0x7c
k8s.io/minikube/test/integration.debugLogs(0x4001656c40, {0x4001b04da0, 0xd})
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:334 +0xc4c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001656c40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:211 +0x980
testing.tRunner(0x4001656c40, 0x4001922680)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364
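
Goroutine 3638 is the only one doing real work when the alarm fires: debugLogs in net_test.go is shelling out to collect diagnostics for the calico subtest and is blocked in the syscall waiting for that child process to exit. CombinedOutput has this shape; the command shown is illustrative, not the one net_test.go actually runs:

    import (
        "log"
        "os/exec"
    )

    func collectDebug() {
        // CombinedOutput calls Start then Wait, so the goroutine sits in
        // Process.Wait (the Waitid syscall frames above) until the child exits.
        out, err := exec.Command("kubectl", "get", "pods", "-A").CombinedOutput()
        if err != nil {
            log.Printf("debug command failed: %v\n%s", err, out)
        }
    }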

goroutine 5998 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4004f036c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5997
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1171 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0x4001ac1200, 0x4001aa2f50)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 778
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5999 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001aadc80, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5997
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4027 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001d79140, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4022
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3572 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x4000696780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001823dc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001823dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001823dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001823dc0, 0x4001308980)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3749 [chan receive, 30 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001758b40, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3728
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4030 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4004ee3390, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ee3380)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001d79140)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000a1688?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x400144c928?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x400131cf38, {0x369e620, 0x400158a900}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e620?, 0x400158a900?}, 0x0?, 0x36e6718?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016f3dd0, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4027
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3573 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x4000696780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001634380)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001634380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001634380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001634380, 0x4001308a00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 887 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4001402a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 909
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 888 [chan receive, 108 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006f9da0, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 909
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3637 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x4000696780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016568c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016568c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016568c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016568c0, 0x4001922580)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6003 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x4004fb5f40, 0x4004fb5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x0?, 0x4004fb5f40, 0x4004fb5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x36e6718?, 0x4001506310?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001506230?, 0x0?, 0x40000823f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5999
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6071 [IO wait]:
internal/poll.runtime_pollWait(0xffff5861f400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40017b2960?, 0x40015b9800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40017b2960, {0x40015b9800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400190c940, {0x40015b9800?, 0x400134ad68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40006b4a50, {0x369c9e8, 0x40018bea10})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cbe0, 0x40006b4a50}, {0x369c9e8, 0x40018bea10}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400190c940?, {0x369cbe0, 0x40006b4a50})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400190c940, {0x369cbe0, 0x40006b4a50})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cbe0, 0x40006b4a50}, {0x369ca68, 0x400190c940}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001656c40?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 3638
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 1985 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x4001342900, 0x4001d6c770)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1453
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5293 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001610660, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3435 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001822380, 0x339be20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3252
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1257 [select, 108 minutes]:
net/http.(*persistConn).writeLoop(0x4001aa99e0)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1247
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 917 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x40000a1f40, 0x40012e3f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x68?, 0x40000a1f40, 0x40000a1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400141e180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 888
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3510 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001822fc0, 0x40004f9c20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3188
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 916 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4004ee3490, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ee3480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006f9da0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400144cc40?, 0x2d6562756b696e69?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x3520202020333130?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x4001321f38, {0x369e620, 0x40016edb60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x6464612d6562756b?, {0x369e620?, 0x40016edb60?}, 0x75?, 0x6550203a65746174?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016cc430, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 888
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 918 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 917
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1467 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x4001b21d90, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b21d80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001645080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003e95e0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x4001523f38, {0x369e620, 0x4001db8a20}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f43d0?, {0x369e620?, 0x4001db8a20?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000875960, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1536
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1468 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x4004fbb740, 0x40012e1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x84?, 0x4004fbb740, 0x4004fbb788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x4001bfc600?, 0x4001d86780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000695c80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1536
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3838 [chan receive, 3 minutes]:
testing.(*T).Run(0x4004f02000, {0x2994331?, 0x6ee?}, 0x400172e000)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4004f02000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4004f02000, 0x4001922180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3439
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3571 [chan receive, 32 minutes]:
testing.(*testState).waitParallel(0x4000696780)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001823c00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001823c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001823c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001823c00, 0x4001308900)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3510
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4238 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4004ee2550, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4004ee2540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001610c00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003e9490?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x40006eff38, {0x369e620, 0x40015179b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f43d0?, {0x369e620?, 0x40015179b0?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001bb5210, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4235
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1535 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x40012f1340?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1534
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1536 [chan receive, 80 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001645080, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1534
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6004 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6003
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3252 [chan receive, 35 minutes]:
testing.(*T).Run(0x40016348c0, {0x296d81f?, 0x4001323f58?}, 0x339be20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40016348c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40016348c0, 0x339bc38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4026 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4001746d80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4022
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4234 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x4001402a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4233
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6006 [IO wait]:
internal/poll.runtime_pollWait(0xffff5861f000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001309c00?, 0x400154c000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001309c00, {0x400154c000, 0x6000, 0x6000})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
net.(*netFD).Read(0x4001309c00, {0x400154c000?, 0x400154c000?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:68 +0x28
net.(*conn).Read(0x400190c510, {0x400154c000?, 0x40015248a8?, 0x8b27c?})
	/usr/local/go/src/net/net.go:196 +0x34
crypto/tls.(*atLeastReader).Read(0x400146dd58, {0x400154c000?, 0x4001524908?, 0x2cbb64?})
	/usr/local/go/src/crypto/tls/conn.go:816 +0x38
bytes.(*Buffer).ReadFrom(0x400133a9a8, {0x369ed40, 0x400146dd58})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
crypto/tls.(*Conn).readFromUntil(0x400133a708, {0xffff586cf318, 0x40014e1b90}, 0x40015249b0?)
	/usr/local/go/src/crypto/tls/conn.go:838 +0xcc
crypto/tls.(*Conn).readRecordOrCCS(0x400133a708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:627 +0x340
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:589
crypto/tls.(*Conn).Read(0x400133a708, {0x40014ce000, 0x1000, 0x4000000000?})
	/usr/local/go/src/crypto/tls/conn.go:1392 +0x14c
bufio.(*Reader).Read(0x40016118c0, {0x40004f6d64, 0x9, 0x542a60?})
	/usr/local/go/src/bufio/bufio.go:245 +0x188
io.ReadAtLeast({0x369cc80, 0x40016118c0}, {0x40004f6d64, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x98
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x40004f6d64, 0x9, 0x4000000025?}, {0x369cc80?, 0x40016118c0?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:242 +0x58
golang.org/x/net/http2.(*Framer).ReadFrameHeader(0x40004f6d20)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:505 +0x60
golang.org/x/net/http2.(*Framer).ReadFrame(0x40004f6d20)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/frame.go:564 +0x20
golang.org/x/net/http2.(*clientConnReadLoop).run(0x4001524f98)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2208 +0xb8
golang.org/x/net/http2.(*ClientConn).readLoop(0x4001635dc0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:2077 +0x4c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6005
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.47.0/http2/transport.go:866 +0xa90

goroutine 5313 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x4001b21a90, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b21a80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001610660)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001616230?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x4001b29ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x40000d2f38, {0x369e620, 0x4004edede0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001b29fa8?, {0x369e620?, 0x4004edede0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004e9cc20, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5293
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4239 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x40000a1740, 0x40006dff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0xa0?, 0x40000a1740, 0x40000a1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x4001342a80?, 0x400197c500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x400011e600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4235
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4031 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x400134e740, 0x4001521f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x0?, 0x400134e740, 0x400134e788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x36e6718?, 0x4004ead9d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4004ead880?, 0x0?, 0x400011ed80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4027
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1895 [chan send, 78 minutes]:
os/exec.(*Cmd).watchCtx(0x400011f080, 0x400144cd20)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1894
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5654 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x4001b20790, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001b20780)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4004f0ad80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001a68150?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e6ab0?, 0x40000823f0?}, 0x400134ceb8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e6ab0, 0x40000823f0}, 0x40006def38, {0x369e620, 0x400087d650}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e620?, 0x400087d650?}, 0x1?, 0x36e6718?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000794f50, 0x3b9aca00, 0x0, 0x1, 0x40000823f0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5638
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4032 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4031
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5503 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e66a8, 0x40015c7770}, {0x36d4760, 0x4004ea0940}, 0x1, 0x0, 0x4001447b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6718?, 0x400023aa10?}, 0x3b9aca00, 0x4001447d28?, 0x1, 0x4001447b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6718, 0x400023aa10}, 0x40006e61c0, {0x4001541338, 0x11}, {0x29942e1, 0x14}, {0x29ac250, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6718, 0x400023aa10}, 0x40006e61c0, {0x4001541338, 0x11}, {0x29787f9?, 0x20d6cc2b00161e84?}, {0x693a22ac?, 0x40006dbf58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40006e61c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40006e61c0, 0x400172e000)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3838
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5637 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff760, {{0x36f43d0, 0x40001bc080?}, 0x40006e68c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5636
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5315 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5314
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5314 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x4004fb6740, 0x4004fb6788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x78?, 0x4004fb6740, 0x4004fb6788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013e1800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5293
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3742 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x4001336f40, 0x4001336f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x50?, 0x4001336f40, 0x4001336f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x36e6718?, 0x4001aa3c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001747500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3749
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5655 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e6ab0, 0x40000823f0}, 0x400134ff40, 0x400134ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e6ab0, 0x40000823f0}, 0x9c?, 0x400134ff40, 0x400134ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e6ab0?, 0x40000823f0?}, 0x0?, 0x400134ff50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f43d0?, 0x40001bc080?, 0x40006e68c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5638
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5656 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5655
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8


Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 38.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 30.73
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 29.44
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 169.42
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 9.86
57 TestAddons/StoppedEnableDisable 12.42
58 TestCertOptions 39.67
59 TestCertExpiration 243.43
61 TestForceSystemdFlag 37.24
62 TestForceSystemdEnv 40.62
67 TestErrorSpam/setup 32.59
68 TestErrorSpam/start 0.83
69 TestErrorSpam/status 1.25
70 TestErrorSpam/pause 6.9
71 TestErrorSpam/unpause 5.05
72 TestErrorSpam/stop 1.5
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 78.43
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 40.82
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
84 TestFunctional/serial/CacheCmd/cache/add_local 1.29
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 40.33
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.44
95 TestFunctional/serial/LogsFileCmd 1.49
96 TestFunctional/serial/InvalidService 4.57
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 14.08
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.25
106 TestFunctional/parallel/ServiceCmdConnect 8.67
107 TestFunctional/parallel/AddonsCmd 0.2
108 TestFunctional/parallel/PersistentVolumeClaim 23.22
110 TestFunctional/parallel/SSHCmd 0.56
111 TestFunctional/parallel/CpCmd 2.16
113 TestFunctional/parallel/FileSync 0.39
114 TestFunctional/parallel/CertSync 2.12
118 TestFunctional/parallel/NodeLabels 0.09
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
122 TestFunctional/parallel/License 0.58
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 1.1
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.54
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.46
130 TestFunctional/parallel/ImageCommands/Setup 0.74
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
136 TestFunctional/parallel/ServiceCmd/DeployApp 6.3
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
147 TestFunctional/parallel/ServiceCmd/List 0.45
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
150 TestFunctional/parallel/ServiceCmd/Format 0.49
151 TestFunctional/parallel/ServiceCmd/URL 0.39
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
159 TestFunctional/parallel/ProfileCmd/profile_list 0.41
160 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
161 TestFunctional/parallel/MountCmd/any-port 7.24
162 TestFunctional/parallel/MountCmd/specific-port 2.54
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.72
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.14
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.84
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.11
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.97
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.21
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.73
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.31
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.27
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.7
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.54
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.28
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.62
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.92
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.89
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.3
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.23
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.37
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.53
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.75
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 194.54
265 TestMultiControlPlane/serial/DeployApp 7.46
266 TestMultiControlPlane/serial/PingHostFromPods 1.5
267 TestMultiControlPlane/serial/AddWorkerNode 60.09
268 TestMultiControlPlane/serial/NodeLabels 0.1
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
270 TestMultiControlPlane/serial/CopyFile 20.25
271 TestMultiControlPlane/serial/StopSecondaryNode 12.88
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
273 TestMultiControlPlane/serial/RestartSecondaryNode 27.31
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.04
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 129.02
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.14
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
278 TestMultiControlPlane/serial/StopCluster 36.19
279 TestMultiControlPlane/serial/RestartCluster 75.4
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.85
281 TestMultiControlPlane/serial/AddSecondaryNode 82.5
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 77.32
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.85
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 61.65
313 TestKicCustomNetwork/use_default_bridge_network 35.57
314 TestKicExistingNetwork 36.15
315 TestKicCustomSubnet 34.24
316 TestKicStaticIP 35.15
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 69.07
321 TestMountStart/serial/StartWithMountFirst 9.17
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.71
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.41
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 138.9
333 TestMultiNode/serial/DeployApp2Nodes 4.7
334 TestMultiNode/serial/PingHostFrom2Pods 0.9
335 TestMultiNode/serial/AddNode 57.87
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.72
338 TestMultiNode/serial/CopyFile 10.58
339 TestMultiNode/serial/StopNode 2.45
340 TestMultiNode/serial/StartAfterStop 8.18
341 TestMultiNode/serial/RestartKeepsNodes 81.86
342 TestMultiNode/serial/DeleteNode 5.63
343 TestMultiNode/serial/StopMultiNode 23.98
344 TestMultiNode/serial/RestartMultiNode 51.73
345 TestMultiNode/serial/ValidateNameConflict 37.38
350 TestPreload 124.28
352 TestScheduledStopUnix 104.62
355 TestInsufficientStorage 12.68
356 TestRunningBinaryUpgrade 299.46
359 TestMissingContainerUpgrade 123.63
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 45.21
363 TestNoKubernetes/serial/StartWithStopK8s 7.47
364 TestNoKubernetes/serial/Start 9.59
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.45
367 TestNoKubernetes/serial/ProfileList 3.92
368 TestNoKubernetes/serial/Stop 1.42
369 TestNoKubernetes/serial/StartNoArgs 7.37
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
371 TestStoppedBinaryUpgrade/Setup 2.08
372 TestStoppedBinaryUpgrade/Upgrade 303.29
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.74
382 TestPause/serial/Start 81.65
383 TestPause/serial/SecondStartNoReconfiguration 28.03
TestDownloadOnly/v1.28.0/json-events (38.06s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (38.060909294s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (38.06s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 23:51:15.927364    4875 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 23:51:15.927443    4875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-838700
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-838700: exit status 85 (89.480826ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-838700 │ jenkins │ v1.37.0 │ 10 Dec 25 23:50 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:50:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:50:37.912397    4881 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:50:37.912593    4881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:50:37.912619    4881 out.go:374] Setting ErrFile to fd 2...
	I1210 23:50:37.912642    4881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:50:37.912927    4881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	W1210 23:50:37.913103    4881 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22061-2739/.minikube/config/config.json: open /home/jenkins/minikube-integration/22061-2739/.minikube/config/config.json: no such file or directory
	I1210 23:50:37.913579    4881 out.go:368] Setting JSON to true
	I1210 23:50:37.914369    4881 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":124,"bootTime":1765410514,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 23:50:37.914460    4881 start.go:143] virtualization:  
	I1210 23:50:37.920398    4881 out.go:99] [download-only-838700] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 23:50:37.920650    4881 notify.go:221] Checking for updates...
	W1210 23:50:37.920591    4881 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 23:50:37.924088    4881 out.go:171] MINIKUBE_LOCATION=22061
	I1210 23:50:37.927602    4881 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:50:37.930891    4881 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:50:37.934103    4881 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1210 23:50:37.937302    4881 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 23:50:37.943402    4881 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 23:50:37.943676    4881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:50:37.976769    4881 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 23:50:37.976867    4881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:50:38.389270    4881 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 23:50:38.380216268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:50:38.389363    4881 docker.go:319] overlay module found
	I1210 23:50:38.392466    4881 out.go:99] Using the docker driver based on user configuration
	I1210 23:50:38.392495    4881 start.go:309] selected driver: docker
	I1210 23:50:38.392508    4881 start.go:927] validating driver "docker" against <nil>
	I1210 23:50:38.392619    4881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:50:38.446925    4881 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 23:50:38.438377264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:50:38.447087    4881 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:50:38.447380    4881 start_flags.go:425] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 23:50:38.447572    4881 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 23:50:38.450690    4881 out.go:171] Using Docker driver with root privileges
	I1210 23:50:38.453771    4881 cni.go:84] Creating CNI manager for ""
	I1210 23:50:38.453835    4881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:50:38.453851    4881 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:50:38.453924    4881 start.go:353] cluster config:
	{Name:download-only-838700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-838700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:50:38.456866    4881 out.go:99] Starting "download-only-838700" primary control-plane node in "download-only-838700" cluster
	I1210 23:50:38.456885    4881 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:50:38.459788    4881 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:50:38.459827    4881 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:50:38.459970    4881 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:50:38.476463    4881 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:50:38.476634    4881 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 23:50:38.476732    4881 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:50:38.515459    4881 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1210 23:50:38.515492    4881 cache.go:65] Caching tarball of preloaded images
	I1210 23:50:38.515656    4881 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 23:50:38.519093    4881 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 23:50:38.519118    4881 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1210 23:50:38.616738    4881 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1210 23:50:38.616865    4881 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1210 23:50:45.091175    4881 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	
	
	* The control-plane node download-only-838700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-838700"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
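Note: this subtest passes even though "minikube logs" exits non-zero. A download-only profile never creates a host, so "logs" prints the audit table plus the "host does not exist" hint and exits with status 85; the harness records the failure (aaa_download_only_test.go:184) without failing the test. A sketch of reproducing the same behavior, trimmed of the duplicated --container-runtime flag shown in the audit table:

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-838700 --force \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
    out/minikube-linux-arm64 logs -p download-only-838700
    echo "exit status: $?"   # 85 in this run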

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-838700
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
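The two delete subtests cover both cleanup paths, and DeleteAlwaysSucceeds is named for the invariant it checks: deleting by profile name must succeed even when the profile is already gone.

    out/minikube-linux-arm64 delete --all                     # removes every profile
    out/minikube-linux-arm64 delete -p download-only-838700   # still exits 0 afterwards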

TestDownloadOnly/v1.34.2/json-events (30.73s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-887652 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-887652 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (30.727446169s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (30.73s)
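As the LogsDuration transcripts show, each preload is verified against an MD5 checksum fetched from the GCS API before being written into the cache. A rough manual equivalent, using the URL and checksum values that appear in the v1.34.2 log below:

    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
    echo "36a1245638f6169d426638fac0bd307d  preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4" | md5sum -c -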

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 23:51:47.096647    4875 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 23:51:47.096682    4875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-887652
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-887652: exit status 85 (90.13291ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-838700 │ jenkins │ v1.37.0 │ 10 Dec 25 23:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ delete  │ -p download-only-838700                                                                                                                                                   │ download-only-838700 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ start   │ -o=json --download-only -p download-only-887652 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887652 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:51:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:51:16.410747    5079 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:51:16.410919    5079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:51:16.410948    5079 out.go:374] Setting ErrFile to fd 2...
	I1210 23:51:16.410995    5079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:51:16.411293    5079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:51:16.411734    5079 out.go:368] Setting JSON to true
	I1210 23:51:16.412500    5079 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":163,"bootTime":1765410514,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 23:51:16.412596    5079 start.go:143] virtualization:  
	I1210 23:51:16.416071    5079 out.go:99] [download-only-887652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 23:51:16.416379    5079 notify.go:221] Checking for updates...
	I1210 23:51:16.420255    5079 out.go:171] MINIKUBE_LOCATION=22061
	I1210 23:51:16.423247    5079 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:51:16.426218    5079 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:51:16.429197    5079 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1210 23:51:16.432181    5079 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 23:51:16.437796    5079 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 23:51:16.438081    5079 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:51:16.460123    5079 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 23:51:16.460224    5079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:51:16.530594    5079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:51:16.519672701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:51:16.530697    5079 docker.go:319] overlay module found
	I1210 23:51:16.533690    5079 out.go:99] Using the docker driver based on user configuration
	I1210 23:51:16.533736    5079 start.go:309] selected driver: docker
	I1210 23:51:16.533749    5079 start.go:927] validating driver "docker" against <nil>
	I1210 23:51:16.533867    5079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:51:16.590609    5079 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:51:16.581746099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:51:16.590771    5079 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:51:16.591087    5079 start_flags.go:425] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 23:51:16.591252    5079 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 23:51:16.594411    5079 out.go:171] Using Docker driver with root privileges
	I1210 23:51:16.597311    5079 cni.go:84] Creating CNI manager for ""
	I1210 23:51:16.597381    5079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:51:16.597395    5079 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:51:16.597475    5079 start.go:353] cluster config:
	{Name:download-only-887652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-887652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:51:16.600450    5079 out.go:99] Starting "download-only-887652" primary control-plane node in "download-only-887652" cluster
	I1210 23:51:16.600476    5079 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:51:16.603288    5079 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:51:16.603341    5079 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:51:16.603518    5079 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:51:16.619494    5079 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:51:16.619626    5079 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 23:51:16.619655    5079 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 23:51:16.619668    5079 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 23:51:16.619675    5079 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 23:51:16.653689    5079 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 23:51:16.653719    5079 cache.go:65] Caching tarball of preloaded images
	I1210 23:51:16.653900    5079 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:51:16.657197    5079 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1210 23:51:16.657230    5079 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1210 23:51:16.762397    5079 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1210 23:51:16.762455    5079 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 23:51:46.329657    5079 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 23:51:46.330095    5079 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/download-only-887652/config.json ...
	I1210 23:51:46.330132    5079 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/download-only-887652/config.json: {Name:mkc79ddaec4ca48b899119ab8d065b7b5c9c167d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 23:51:46.330322    5079 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 23:51:46.330498    5079 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22061-2739/.minikube/cache/linux/arm64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-887652 host does not exist
	  To start a cluster, run: "minikube start -p download-only-887652"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-887652
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (29.44s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-669413 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-669413 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (29.439907801s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (29.44s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 23:52:16.971912    4875 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 23:52:16.971948    4875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-669413
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-669413: exit status 85 (81.736968ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-838700 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-838700 │ jenkins │ v1.37.0 │ 10 Dec 25 23:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ delete  │ -p download-only-838700                                                                                                                                                          │ download-only-838700 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ start   │ -o=json --download-only -p download-only-887652 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-887652 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ delete  │ -p download-only-887652                                                                                                                                                          │ download-only-887652 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │ 10 Dec 25 23:51 UTC │
	│ start   │ -o=json --download-only -p download-only-669413 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-669413 │ jenkins │ v1.37.0 │ 10 Dec 25 23:51 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 23:51:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 23:51:47.576441    5278 out.go:360] Setting OutFile to fd 1 ...
	I1210 23:51:47.576619    5278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:51:47.576650    5278 out.go:374] Setting ErrFile to fd 2...
	I1210 23:51:47.576673    5278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 23:51:47.576961    5278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1210 23:51:47.577448    5278 out.go:368] Setting JSON to true
	I1210 23:51:47.578217    5278 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":194,"bootTime":1765410514,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 23:51:47.578315    5278 start.go:143] virtualization:  
	I1210 23:51:47.581886    5278 out.go:99] [download-only-669413] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 23:51:47.582119    5278 notify.go:221] Checking for updates...
	I1210 23:51:47.585955    5278 out.go:171] MINIKUBE_LOCATION=22061
	I1210 23:51:47.589255    5278 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 23:51:47.592232    5278 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1210 23:51:47.595171    5278 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1210 23:51:47.598143    5278 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 23:51:47.604067    5278 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 23:51:47.604378    5278 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 23:51:47.641237    5278 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 23:51:47.641341    5278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:51:47.699630    5278 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:51:47.690404133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:51:47.699742    5278 docker.go:319] overlay module found
	I1210 23:51:47.702747    5278 out.go:99] Using the docker driver based on user configuration
	I1210 23:51:47.702799    5278 start.go:309] selected driver: docker
	I1210 23:51:47.702808    5278 start.go:927] validating driver "docker" against <nil>
	I1210 23:51:47.702927    5278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 23:51:47.767168    5278 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 23:51:47.758747742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 23:51:47.767321    5278 start_flags.go:342] no existing cluster config was found, will generate one from the flags 
	I1210 23:51:47.767574    5278 start_flags.go:425] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 23:51:47.767735    5278 start_flags.go:1113] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 23:51:47.770918    5278 out.go:171] Using Docker driver with root privileges
	I1210 23:51:47.773924    5278 cni.go:84] Creating CNI manager for ""
	I1210 23:51:47.773987    5278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 23:51:47.774001    5278 start_flags.go:351] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 23:51:47.774081    5278 start.go:353] cluster config:
	{Name:download-only-669413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-669413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: IPv6: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 23:51:47.776904    5278 out.go:99] Starting "download-only-669413" primary control-plane node in "download-only-669413" cluster
	I1210 23:51:47.776928    5278 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 23:51:47.779688    5278 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 23:51:47.779725    5278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:51:47.779780    5278 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 23:51:47.795718    5278 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 23:51:47.795835    5278 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 23:51:47.795860    5278 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 23:51:47.795871    5278 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 23:51:47.795878    5278 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 23:51:47.834811    5278 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 23:51:47.834846    5278 cache.go:65] Caching tarball of preloaded images
	I1210 23:51:47.835025    5278 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 23:51:47.838095    5278 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1210 23:51:47.838119    5278 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1210 23:51:47.938110    5278 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1210 23:51:47.938180    5278 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22061-2739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-669413 host does not exist
	  To start a cluster, run: "minikube start -p download-only-669413"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-669413
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1210 23:52:18.283320    4875 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-844379 --alsologtostderr --binary-mirror http://127.0.0.1:33593 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-844379" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-844379
--- PASS: TestBinaryMirror (0.63s)
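TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:33593 in this run) so Kubernetes binaries are fetched from the mirror rather than dl.k8s.io. A hedged sketch of the same setup; the python3 file server is an assumption standing in for whatever the test actually serves the binaries with:

    # Assumption: any static server exposing the dl.k8s.io release layout would do.
    python3 -m http.server 33593 &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-844379 --alsologtostderr \
      --binary-mirror http://127.0.0.1:33593 --driver=docker --container-runtime=crio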

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-903947
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-903947: exit status 85 (72.772617ms)

-- stdout --
	* Profile "addons-903947" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903947"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-903947
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-903947: exit status 85 (75.948952ms)

-- stdout --
	* Profile "addons-903947" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903947"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
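Both PreSetup subtests pin down the same pre-cluster failure mode: toggling an addon against a profile that does not exist yet must fail fast with exit status 85 and a pointer to "minikube profile list". Verbatim from the transcripts above:

    out/minikube-linux-arm64 addons enable dashboard -p addons-903947    # exit status 85
    out/minikube-linux-arm64 addons disable dashboard -p addons-903947   # exit status 85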

TestAddons/Setup (169.42s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-903947 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-903947 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.422301815s)
--- PASS: TestAddons/Setup (169.42s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-903947 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-903947 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-903947 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-903947 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [be200ed0-5d73-4ca5-a017-389e615081d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [be200ed0-5d73-4ca5-a017-389e615081d5] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00335817s
addons_test.go:696: (dbg) Run:  kubectl --context addons-903947 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-903947 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-903947 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-903947 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.42s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-903947
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-903947: (12.158815739s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-903947
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-903947
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-903947
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

                                                
                                    
TestCertOptions (39.67s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-162804 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-162804 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.812908646s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-162804 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-162804 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-162804 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-162804" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-162804
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-162804: (2.108771942s)
--- PASS: TestCertOptions (39.67s)
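Note: the openssl invocation above is how the extra --apiserver-ips/--apiserver-names SANs are verified. To inspect just the Subject Alternative Name block on any profile, something like the following works (a sketch; the grep window is illustrative):
    minikube ssh -p cert-options-162804 -- "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'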

                                                
                                    
TestCertExpiration (243.43s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-408797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-408797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.871011216s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-408797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-408797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.31635094s)
helpers_test.go:176: Cleaning up "cert-expiration-408797" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-408797
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-408797: (3.244874245s)
--- PASS: TestCertExpiration (243.43s)
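Note: the test first issues certificates that expire in 3 minutes, then restarts with --cert-expiration=8760h (one year) to force regeneration. The effective expiry can be read back directly (a sketch):
    minikube ssh -p cert-expiration-408797 -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"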

                                                
                                    
TestForceSystemdFlag (37.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-097163 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-097163 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.984116632s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-097163 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-097163" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-097163
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-097163: (2.815854509s)
--- PASS: TestForceSystemdFlag (37.24s)
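Note: with --force-systemd the test expects CRI-O to be configured with the systemd cgroup manager; the drop-in read above can be checked for that setting directly (a sketch; the expected value is an assumption based on the flag's purpose):
    minikube ssh -p force-systemd-flag-097163 -- "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expect: cgroup_manager = "systemd"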

                                                
                                    
TestForceSystemdEnv (40.62s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-378272 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-378272 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.669459619s)
helpers_test.go:176: Cleaning up "force-systemd-env-378272" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-378272
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-378272: (2.950098713s)
--- PASS: TestForceSystemdEnv (40.62s)
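Note: this variant drives the same behaviour through the environment rather than a flag; MINIKUBE_FORCE_SYSTEMD (visible, unset, in the start banners elsewhere in this report) is set for the child process, roughly (a sketch):
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-378272 --memory=3072 --driver=docker --container-runtime=crio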

                                                
                                    
TestErrorSpam/setup (32.59s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-433958 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-433958 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-433958 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-433958 --driver=docker  --container-runtime=crio: (32.594716639s)
--- PASS: TestErrorSpam/setup (32.59s)

                                                
                                    
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.25s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 status
--- PASS: TestErrorSpam/status (1.25s)

                                                
                                    
TestErrorSpam/pause (6.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause: exit status 80 (2.487792189s)

                                                
                                                
-- stdout --
	* Pausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause: exit status 80 (2.330789035s)

                                                
                                                
-- stdout --
	* Pausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause: exit status 80 (2.081081011s)

                                                
                                                
-- stdout --
	* Pausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.90s)
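Note: all three pause attempts fail the same way because `runc list` reads its state directory (by default /run/runc when running as root) and that directory is missing inside the node. The failing step can be reproduced in isolation (a sketch):
    minikube ssh -p nospam-433958 -- "sudo runc list -f json"   # error: open /run/runc: no such file or directory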

                                                
                                    
TestErrorSpam/unpause (5.05s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause: exit status 80 (1.551181304s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause: exit status 80 (1.901565468s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause: exit status 80 (1.600891836s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-433958 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T23:59:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.05s)

                                                
                                    
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 stop: (1.304182381s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-433958 --log_dir /tmp/nospam-433958 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
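Note: files placed under $MINIKUBE_HOME/files are copied into the node at the mirrored path on start, which is what this test relies on (a sketch; the file contents are illustrative):
    mkdir -p ~/.minikube/files/etc/test/nested/copy/4875
    echo '127.0.0.1 example.test' > ~/.minikube/files/etc/test/nested/copy/4875/hosts
    minikube start -p functional-976823 ...
    minikube ssh -p functional-976823 -- cat /etc/test/nested/copy/4875/hosts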

                                                
                                    
TestFunctional/serial/StartWithProxy (78.43s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1211 00:00:09.649820    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.659840    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.671904    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.693380    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.734834    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.816237    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:09.977911    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:10.299525    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:10.941633    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:12.223319    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:14.785020    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:19.906810    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:00:30.148310    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-976823 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.433215052s)
--- PASS: TestFunctional/serial/StartWithProxy (78.43s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.82s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1211 00:00:39.733930    4875 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --alsologtostderr -v=8
E1211 00:00:50.630004    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-976823 --alsologtostderr -v=8: (40.811907454s)
functional_test.go:678: soft start took 40.822509051s for "functional-976823" cluster.
I1211 00:01:20.546566    4875 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (40.82s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-976823 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:3.1: (1.18921552s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:3.3: (1.218089163s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 cache add registry.k8s.io/pause:latest: (1.127633982s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-976823 /tmp/TestFunctionalserialCacheCmdcacheadd_local1818491895/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache add minikube-local-cache-test:functional-976823
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache delete minikube-local-cache-test:functional-976823
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-976823
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.679041ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 kubectl -- --context functional-976823 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-976823 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.33s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1211 00:01:31.592246    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-976823 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.318421345s)
functional_test.go:776: restart took 40.318534759s for "functional-976823" cluster.
I1211 00:02:08.552547    4875 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (40.33s)
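Note: --extra-config=apiserver.<key>=<value> is passed through to the kube-apiserver command line; whether the admission plugin landed can be confirmed from the static pod spec (a sketch; the component label is standard on kubeadm control-plane pods):
    kubectl --context functional-976823 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins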

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-976823 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
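Note: the phase/status lines above come from the control-plane pod objects fetched with the tier=control-plane selector; an equivalent one-liner with jsonpath (a sketch):
    kubectl --context functional-976823 -n kube-system get po -l tier=control-plane -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'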

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 logs: (1.440894986s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 logs --file /tmp/TestFunctionalserialLogsFileCmd1353562385/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 logs --file /tmp/TestFunctionalserialLogsFileCmd1353562385/001/logs.txt: (1.491431429s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

                                                
                                    
TestFunctional/serial/InvalidService (4.57s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-976823 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-976823
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-976823: exit status 115 (410.220521ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32562 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-976823 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.57s)
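Note: exit status 115 (SVC_UNREACHABLE) is expected here because the service has no running backing pod. Any NodePort service whose selector matches nothing reproduces it; testdata/invalidsvc.yaml is not reproduced in this report, but a minimal stand-in looks like this (a sketch, all names hypothetical):
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: does-not-exist   # matches no pods, so the service stays unreachable
      ports:
      - port: 80
    # then: minikube service invalid-svc -p functional-976823   -> exit status 115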

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 config get cpus: exit status 14 (80.53906ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 config get cpus: exit status 14 (67.657109ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-976823 --alsologtostderr -v=1]
E1211 00:02:53.514517    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-976823 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 31394: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.08s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-976823 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.965078ms)

                                                
                                                
-- stdout --
	* [functional-976823] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 00:02:52.704865   30801 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:02:52.704975   30801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:02:52.704992   30801 out.go:374] Setting ErrFile to fd 2...
	I1211 00:02:52.704997   30801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:02:52.705645   30801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:02:52.706071   30801 out.go:368] Setting JSON to false
	I1211 00:02:52.706903   30801 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":859,"bootTime":1765410514,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:02:52.707006   30801 start.go:143] virtualization:  
	I1211 00:02:52.710605   30801 out.go:179] * [functional-976823] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:02:52.713521   30801 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:02:52.713628   30801 notify.go:221] Checking for updates...
	I1211 00:02:52.719659   30801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:02:52.722605   30801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:02:52.725444   30801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:02:52.728383   30801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:02:52.731501   30801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:02:52.734808   30801 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:02:52.735543   30801 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:02:52.757724   30801 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:02:52.757839   30801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:02:52.827927   30801 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-11 00:02:52.81842537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:02:52.828038   30801 docker.go:319] overlay module found
	I1211 00:02:52.831076   30801 out.go:179] * Using the docker driver based on existing profile
	I1211 00:02:52.833814   30801 start.go:309] selected driver: docker
	I1211 00:02:52.833834   30801 start.go:927] validating driver "docker" against &{Name:functional-976823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-976823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:02:52.833963   30801 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:02:52.837459   30801 out.go:203] 
	W1211 00:02:52.840235   30801 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1211 00:02:52.842994   30801 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
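Note: --dry-run validates the requested configuration without touching the cluster; 250MB trips the 1800MB usable minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation without a memory override passes. A passing explicit request would look like (a sketch; the memory value is illustrative):
    minikube start -p functional-976823 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio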

x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-976823 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-976823 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (229.811214ms)

-- stdout --
	* [functional-976823] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1211 00:02:53.169436   30920 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:02:53.169662   30920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:02:53.169672   30920 out.go:374] Setting ErrFile to fd 2...
	I1211 00:02:53.169682   30920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:02:53.171342   30920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:02:53.171808   30920 out.go:368] Setting JSON to false
	I1211 00:02:53.172869   30920 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":860,"bootTime":1765410514,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:02:53.172971   30920 start.go:143] virtualization:  
	I1211 00:02:53.176112   30920 out.go:179] * [functional-976823] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1211 00:02:53.179838   30920 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:02:53.179920   30920 notify.go:221] Checking for updates...
	I1211 00:02:53.185531   30920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:02:53.188617   30920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:02:53.191502   30920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:02:53.194644   30920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:02:53.197596   30920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:02:53.200891   30920 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:02:53.201476   30920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:02:53.229335   30920 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:02:53.229457   30920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:02:53.318530   30920 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-11 00:02:53.303597256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:02:53.318653   30920 docker.go:319] overlay module found
	I1211 00:02:53.323036   30920 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1211 00:02:53.325875   30920 start.go:309] selected driver: docker
	I1211 00:02:53.325920   30920 start.go:927] validating driver "docker" against &{Name:functional-976823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-976823 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:02:53.326011   30920 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:02:53.329518   30920 out.go:203] 
	W1211 00:02:53.332432   30920 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1211 00:02:53.335406   30920 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
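The only difference from the DryRun case above is the locale: the same RSRC_INSUFFICIENT_REQ_MEMORY failure is emitted in French. A hedged sketch of how such a run could be reproduced, assuming minikube picks its message language from the standard locale environment variables (binary path and flags copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Re-run of the localized dry-run; binary path, profile name, and
	// flags are taken from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-976823",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// Assumption: LC_ALL drives minikube's translation selection.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // a non-zero exit (23 above) is expected
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized failure message emitted as expected")
	}
}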

x
+
TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
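The -f flag above takes a Go text/template rendered against the status object. A self-contained sketch of the same rendering, with field names assumed from the template rather than taken from minikube's real struct:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields referenced by the -f template in the log above
// (field names assumed from the template, not minikube's exact struct).
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}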

x
+
TestFunctional/parallel/ServiceCmdConnect (8.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-976823 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-976823 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-tw6t2" [24206674-050d-43a5-8b54-2e74533aa563] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-tw6t2" [24206674-050d-43a5-8b54-2e74533aa563] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002710379s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31864
functional_test.go:1680: http://192.168.49.2:31864: success! body:
Request served by hello-node-connect-7d85dfc575-tw6t2

HTTP/1.1 GET /

Host: 192.168.49.2:31864
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
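The test's success condition is simply that an HTTP GET against the NodePort URL returns the echo-server body. A minimal Go sketch of that probe, using the endpoint printed above (the port is allocated dynamically, so treat it as an example):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint shape taken from the log above (node IP + NodePort).
	url := "http://192.168.49.2:31864"
	client := &http.Client{Timeout: 5 * time.Second}
	for i := 0; i < 10; i++ { // retry while the pod becomes Ready
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s", body)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("endpoint never became reachable")
}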

x
+
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (23.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b448aebf-9159-41ea-878a-6127ba6d4007] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003532681s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-976823 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-976823 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-976823 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-976823 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [28e7256c-e265-4c5b-bfa7-960411b5cf62] Pending
helpers_test.go:353: "sp-pod" [28e7256c-e265-4c5b-bfa7-960411b5cf62] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [28e7256c-e265-4c5b-bfa7-960411b5cf62] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003692986s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-976823 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-976823 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-976823 delete -f testdata/storage-provisioner/pod.yaml: (1.203543901s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-976823 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [be22b31b-1910-4efd-bfa0-7d2f4607f19a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [be22b31b-1910-4efd-bfa0-7d2f4607f19a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003591977s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-976823 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.22s)
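The persistence check works by writing a file through the PVC mount, deleting the pod, recreating it against the same claim, and reading the file back. A compressed Go sketch of that sequence via kubectl (context and manifest paths from the log; the real test also waits for pod readiness between steps):

package main

import (
	"fmt"
	"os/exec"
)

// run is a tiny helper around kubectl; the context name and manifest
// paths are the ones shown in the log above.
func run(args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", "functional-976823"}, args...)...).CombinedOutput()
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write through the PVC mount
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod, keep the claim
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate it against the same PVC
	// (the real test waits for the new pod to become Ready here)
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(string(out), err) // expect "foo": the data outlived the pod
}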

x
+
TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)
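minikube ssh is an SSH session into the node container; the image commands later in this report show the client details (127.0.0.1:32778, user docker, the profile's id_rsa). A sketch of the same "echo hello" over golang.org/x/crypto/ssh, under those assumptions:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Port, user, and key path are the ones the sshutil log lines below use.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("echo hello")
	fmt.Printf("%s%v\n", out, err)
}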

x
+
TestFunctional/parallel/CpCmd (2.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh -n functional-976823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cp functional-976823:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3971280473/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh -n functional-976823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh -n functional-976823 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)

x
+
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4875/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /etc/test/nested/copy/4875/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

x
+
TestFunctional/parallel/CertSync (2.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /etc/ssl/certs/4875.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /usr/share/ca-certificates/4875.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/48752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /etc/ssl/certs/48752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/48752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /usr/share/ca-certificates/48752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

x
+
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-976823 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
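The --template argument is a plain Go template ranging over the node's label map. The same template applied to a hand-made map (label values here are illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template shape as the kubectl invocation above, applied to a
	// hand-made label map instead of a live node.
	labels := map[string]string{
		"kubernetes.io/arch":     "arm64",
		"kubernetes.io/hostname": "functional-976823",
		"minikube.k8s.io/name":   "functional-976823",
	}
	tmpl := template.Must(template.New("labels").Parse("{{range $k, $v := .}}{{$k}} {{end}}"))
	tmpl.Execute(os.Stdout, labels)
}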

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh "sudo systemctl is-active docker": exit status 1 (376.438479ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh "sudo systemctl is-active containerd": exit status 1 (432.288556ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
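systemctl is-active prints the unit state and exits non-zero when the unit is not active (status 3 here), so the test passes when the command fails with "inactive" on stdout. A local Go sketch of reading both the state and the exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>` locally; the test above does
// the same through `minikube ssh`. Exit status 3 with "inactive" on stdout
// means the unit exists but is not running, the expected state here.
func isActive(unit string) (string, int) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state, code := isActive(unit)
		fmt.Printf("%s: %s (exit %d)\n", unit, state, code)
	}
}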

x
+
TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.58s)

x
+
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

x
+
TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 version -o=json --components: (1.104139653s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-976823 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-976823
localhost/kicbase/echo-server:functional-976823
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-976823 image ls --format short --alsologtostderr:
I1211 00:03:01.175977   32388 out.go:360] Setting OutFile to fd 1 ...
I1211 00:03:01.176089   32388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:01.176095   32388 out.go:374] Setting ErrFile to fd 2...
I1211 00:03:01.176100   32388 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:01.176448   32388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:03:01.177478   32388 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:01.177620   32388 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:01.178340   32388 cli_runner.go:164] Run: docker container inspect functional-976823 --format={{.State.Status}}
I1211 00:03:01.201275   32388 ssh_runner.go:195] Run: systemctl --version
I1211 00:03:01.201347   32388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976823
I1211 00:03:01.232613   32388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa Username:docker}
I1211 00:03:01.343606   32388 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.54s)

x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-976823 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-976823  │ 8c9ecd8b77aa0 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-976823  │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/my-image                      │ functional-976823  │ 2ab383e6c39d8 │ 1.64MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-976823 image ls --format table --alsologtostderr:
I1211 00:03:06.700629   32926 out.go:360] Setting OutFile to fd 1 ...
I1211 00:03:06.700815   32926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:06.700826   32926 out.go:374] Setting ErrFile to fd 2...
I1211 00:03:06.700832   32926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:06.701136   32926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:03:06.701968   32926 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:06.702145   32926 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:06.702789   32926 cli_runner.go:164] Run: docker container inspect functional-976823 --format={{.State.Status}}
I1211 00:03:06.720977   32926 ssh_runner.go:195] Run: systemctl --version
I1211 00:03:06.721100   32926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976823
I1211 00:03:06.744981   32926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa Username:docker}
I1211 00:03:06.850906   32926 ssh_runner.go:195] Run: sudo crictl images --output json
2025/12/11 00:03:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-976823 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d
18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-976823"],"size":"4788229"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0
f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8c9ecd8b77aa03110e45ef538aaae6acc6b3c9527b835d1084135db03c1ee7c9","repoDigests":["localhost/minikube-local-cache-test@sha256
:10335fb1718de8db4c544806630b89ba3c448a5064d78a291c59ee7ed866ffac"],"repoTags":["localhost/minikube-local-cache-test:functional-976823"],"size":"3330"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721d
dbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k
8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c975
00f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-976823 image ls --format json --alsologtostderr:
I1211 00:03:01.708197   32437 out.go:360] Setting OutFile to fd 1 ...
I1211 00:03:01.708389   32437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:01.708417   32437 out.go:374] Setting ErrFile to fd 2...
I1211 00:03:01.708437   32437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:01.708829   32437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:03:01.709836   32437 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:01.710017   32437 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:01.710621   32437 cli_runner.go:164] Run: docker container inspect functional-976823 --format={{.State.Status}}
I1211 00:03:01.730909   32437 ssh_runner.go:195] Run: systemctl --version
I1211 00:03:01.730960   32437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976823
I1211 00:03:01.762687   32437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa Username:docker}
I1211 00:03:01.893626   32437 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
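Each entry in the JSON listing above carries an id, repoDigests, repoTags, and a size in bytes (as a string). A small Go sketch that decodes one such entry (struct fields inferred from the output, not from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the entries in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	data := []byte(`[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",
		"repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],
		"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%.13s  %v  %s bytes\n", im.ID, im.RepoTags, im.Size)
	}
}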

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-976823 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-976823
size: "4788229"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 2ab383e6c39d83132a3208bdf60955a49d5c0f65df2a2ca14b0649c8d5bc1b53
repoDigests:
- localhost/my-image@sha256:4726864717a72bbd4a56602f51f7d4b9f1927380e0690c38674e215b94bebb15
repoTags:
- localhost/my-image:functional-976823
size: "1640791"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 6bb6982d323dca8cca0863f488bf923865ace959a3c6264ea6d35286eb7e0029
repoDigests:
- docker.io/library/3974d5242f7b9b257c4220e5af8fd8e1670b390a2a39871c62e2e3e899380619-tmp@sha256:49d062f418d74a7cacb496da72528ce9db21778c548e72e34b4f331f814bcfb7
repoTags: []
size: "1638179"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8c9ecd8b77aa03110e45ef538aaae6acc6b3c9527b835d1084135db03c1ee7c9
repoDigests:
- localhost/minikube-local-cache-test@sha256:10335fb1718de8db4c544806630b89ba3c448a5064d78a291c59ee7ed866ffac
repoTags:
- localhost/minikube-local-cache-test:functional-976823
size: "3330"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-976823 image ls --format yaml --alsologtostderr:
I1211 00:03:06.453217   32887 out.go:360] Setting OutFile to fd 1 ...
I1211 00:03:06.453359   32887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:06.453372   32887 out.go:374] Setting ErrFile to fd 2...
I1211 00:03:06.453377   32887 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:06.453628   32887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:03:06.454269   32887 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:06.454429   32887 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:06.455058   32887 cli_runner.go:164] Run: docker container inspect functional-976823 --format={{.State.Status}}
I1211 00:03:06.473685   32887 ssh_runner.go:195] Run: systemctl --version
I1211 00:03:06.473738   32887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976823
I1211 00:03:06.495279   32887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa Username:docker}
I1211 00:03:06.605999   32887 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh pgrep buildkitd: exit status 1 (426.590709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr: (3.80073751s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6bb6982d323
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-976823
--> 2ab383e6c39
Successfully tagged localhost/my-image:functional-976823
2ab383e6c39d83132a3208bdf60955a49d5c0f65df2a2ca14b0649c8d5bc1b53
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-976823 image build -t localhost/my-image:functional-976823 testdata/build --alsologtostderr:
I1211 00:03:02.448872   32556 out.go:360] Setting OutFile to fd 1 ...
I1211 00:03:02.449196   32556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:02.449227   32556 out.go:374] Setting ErrFile to fd 2...
I1211 00:03:02.449264   32556 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:03:02.449610   32556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:03:02.450497   32556 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:02.452720   32556 config.go:182] Loaded profile config "functional-976823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1211 00:03:02.453404   32556 cli_runner.go:164] Run: docker container inspect functional-976823 --format={{.State.Status}}
I1211 00:03:02.480822   32556 ssh_runner.go:195] Run: systemctl --version
I1211 00:03:02.480876   32556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-976823
I1211 00:03:02.509587   32556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-976823/id_rsa Username:docker}
I1211 00:03:02.631240   32556 build_images.go:162] Building image from path: /tmp/build.3709381140.tar
I1211 00:03:02.631381   32556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1211 00:03:02.643453   32556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3709381140.tar
I1211 00:03:02.653228   32556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3709381140.tar: stat -c "%s %y" /var/lib/minikube/build/build.3709381140.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3709381140.tar': No such file or directory
I1211 00:03:02.653314   32556 ssh_runner.go:362] scp /tmp/build.3709381140.tar --> /var/lib/minikube/build/build.3709381140.tar (3072 bytes)
I1211 00:03:02.690370   32556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3709381140
I1211 00:03:02.700228   32556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3709381140 -xf /var/lib/minikube/build/build.3709381140.tar
I1211 00:03:02.711193   32556 crio.go:315] Building image: /var/lib/minikube/build/build.3709381140
I1211 00:03:02.711333   32556 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-976823 /var/lib/minikube/build/build.3709381140 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1211 00:03:06.144718   32556 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-976823 /var/lib/minikube/build/build.3709381140 --cgroup-manager=cgroupfs: (3.433342537s)
I1211 00:03:06.144790   32556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3709381140
I1211 00:03:06.152726   32556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3709381140.tar
I1211 00:03:06.160655   32556 build_images.go:218] Built localhost/my-image:functional-976823 from /tmp/build.3709381140.tar
I1211 00:03:06.160688   32556 build_images.go:134] succeeded building to: functional-976823
I1211 00:03:06.160694   32556 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.46s)
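
The build above is minikube's image-build path on crio: the pgrep probe finds no buildkitd in the node (exit 1), so the tarred build context is copied to /var/lib/minikube/build and built with sudo podman build inside the node. A minimal sketch of the same flow by hand, assuming a stock minikube binary stands in for the out/minikube-linux-arm64 build under test:

    # build from a local context directory directly into the node's image store
    minikube -p functional-976823 image build -t localhost/my-image:functional-976823 testdata/build
    # confirm the image landed in the node
    minikube -p functional-976823 image ls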

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-976823
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image load --daemon kicbase/echo-server:functional-976823 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 image load --daemon kicbase/echo-server:functional-976823 --alsologtostderr: (1.246198963s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image load --daemon kicbase/echo-server:functional-976823 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-976823 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-976823 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-hrmmv" [28ae683d-2e88-4368-946f-a8c4ec988284] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-hrmmv" [28ae683d-2e88-4368-946f-a8c4ec988284] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00422652s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.30s)
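
The deployment under test is created with plain kubectl against the profile's context. A sketch of the same setup by hand; the final wait command is an assumption standing in for the test's own 10m0s pod polling:

    kubectl --context functional-976823 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-976823 expose deployment hello-node --type=NodePort --port=8080
    # wait until the pod backing the service is Ready
    kubectl --context functional-976823 wait --for=condition=Ready pod -l app=hello-node --timeout=600s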

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-976823
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image load --daemon kicbase/echo-server:functional-976823 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image save kicbase/echo-server:functional-976823 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image rm kicbase/echo-server:functional-976823 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-976823
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 image save --daemon kicbase/echo-server:functional-976823 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-976823
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
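
Taken together, the ImageCommands sub-tests above exercise a full save/remove/reload round trip between the node and the host. A condensed sketch under the same commands, with /tmp/echo-server-save.tar as a placeholder for the workspace path in the log:

    # export the image from the node to a tarball, then drop it from the node
    minikube -p functional-976823 image save kicbase/echo-server:functional-976823 /tmp/echo-server-save.tar
    minikube -p functional-976823 image rm kicbase/echo-server:functional-976823
    # restore it from the tarball, then push a copy back to the host's docker daemon
    minikube -p functional-976823 image load /tmp/echo-server-save.tar
    minikube -p functional-976823 image save --daemon kicbase/echo-server:functional-976823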

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 29004: os: process already finished
helpers_test.go:526: unable to kill pid 28886: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-976823 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [29217159-c58f-43bc-90e2-c63fd5557ab0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [29217159-c58f-43bc-90e2-c63fd5557ab0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006818846s
I1211 00:02:36.037683    4875 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service list -o json
functional_test.go:1504: Took "437.727091ms" to run "out/minikube-linux-arm64 -p functional-976823 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31843
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31843
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
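
The ServiceCmd lookups above all resolve the same NodePort endpoint (31843 in this run) in different output shapes. The equivalent hand-run commands, assuming a stock minikube binary:

    minikube -p functional-976823 service list
    minikube -p functional-976823 service --namespace=default --https --url hello-node
    minikube -p functional-976823 service hello-node --url --format={{.IP}}
    minikube -p functional-976823 service hello-node --url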

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-976823 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.110.71 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-976823 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
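
The tunnel lifecycle these serial steps cover, reduced to a hand-driven sketch. Backgrounding with & and kill %1 are assumptions (the test manages the tunnel process itself), and 10.108.110.71 is the LoadBalancer IP reported in this run:

    minikube -p functional-976823 tunnel &
    # once the tunnel is up, the service gets a routable ingress IP
    kubectl --context functional-976823 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.108.110.71
    # stopping the tunnel is the equivalent of DeleteTunnel
    kill %1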

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "360.248796ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.312329ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "372.655525ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "52.0738ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
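
The timings above show why the --light flag exists: the full listing checks each profile's live cluster status, while the light variant skips those status checks and reads the stored config (372ms vs 52ms in this run, roughly 7x faster). Usage as exercised by the test:

    minikube profile list -o json
    minikube profile list -o json --light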

TestFunctional/parallel/MountCmd/any-port (7.24s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdany-port1378426828/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765411367642220675" to /tmp/TestFunctionalparallelMountCmdany-port1378426828/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765411367642220675" to /tmp/TestFunctionalparallelMountCmdany-port1378426828/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765411367642220675" to /tmp/TestFunctionalparallelMountCmdany-port1378426828/001/test-1765411367642220675
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (373.984857ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1211 00:02:48.016518    4875 retry.go:31] will retry after 484.245225ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 11 00:02 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 11 00:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 11 00:02 test-1765411367642220675
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh cat /mount-9p/test-1765411367642220675
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-976823 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [1679c352-4e0a-4bb2-9ab4-c679fa28a265] Pending
helpers_test.go:353: "busybox-mount" [1679c352-4e0a-4bb2-9ab4-c679fa28a265] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [1679c352-4e0a-4bb2-9ab4-c679fa28a265] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [1679c352-4e0a-4bb2-9ab4-c679fa28a265] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.01320802s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-976823 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdany-port1378426828/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.24s)
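
The any-port variant exercises minikube's 9p mount without pinning the host port (the specific-port test below passes --port explicitly). A hand-run sketch of the same mount check; /tmp/hostdir is a placeholder, and --kill=true is the cleanup form used by VerifyCleanup further down:

    # share a host directory into the node over 9p; the command keeps running
    minikube mount -p functional-976823 /tmp/hostdir:/mount-9p &
    # verify the mount is visible inside the node
    minikube -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-976823 ssh "ls -la /mount-9p"
    # tear down all mount processes for the profile
    minikube mount -p functional-976823 --kill=true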

TestFunctional/parallel/MountCmd/specific-port (2.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (610.178058ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1211 00:02:55.490422    4875 retry.go:31] will retry after 659.346853ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-976823 ssh "sudo umount -f /mount-9p": exit status 1 (368.629797ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-976823 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdspecific-port511295732/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.54s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T" /mount1: (1.037824145s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-976823 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-976823 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-976823 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3854699538/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-976823
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-976823
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-976823
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:3.1: (1.227690908s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:3.3: (1.360333712s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 cache add registry.k8s.io/pause:latest: (1.133666969s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.72s)
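
cache add pulls each remote image once into the host-side cache and loads it into the node; the companion sub-tests below list and delete entries. The same commands by hand:

    minikube -p functional-786978 cache add registry.k8s.io/pause:3.1
    # the cache itself is global, so list/delete take no profile flag
    minikube cache list
    minikube cache delete registry.k8s.io/pause:3.1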

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach658614119/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache add minikube-local-cache-test:functional-786978
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache delete minikube-local-cache-test:functional-786978
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.208412ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.84s)
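
cache reload re-pushes the cached images into the node's runtime, which is what recovers the pause image after the crictl rmi above. The same three-step check by hand:

    # remove the image from the node's runtime, then restore it from the cache
    minikube -p functional-786978 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-786978 cache reload
    minikube -p functional-786978 ssh sudo crictl inspecti registry.k8s.io/pause:latest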

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2455646345/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.97s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 config get cpus: exit status 14 (74.167802ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 config get cpus: exit status 14 (99.87083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (184.729298ms)

-- stdout --
	* [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1211 00:32:13.632619   62277 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:32:13.632934   62277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.632974   62277 out.go:374] Setting ErrFile to fd 2...
	I1211 00:32:13.632994   62277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.633260   62277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:32:13.633659   62277 out.go:368] Setting JSON to false
	I1211 00:32:13.634496   62277 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2620,"bootTime":1765410514,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:32:13.634591   62277 start.go:143] virtualization:  
	I1211 00:32:13.637883   62277 out.go:179] * [functional-786978] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1211 00:32:13.641587   62277 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:32:13.641681   62277 notify.go:221] Checking for updates...
	I1211 00:32:13.647316   62277 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:32:13.650301   62277 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:32:13.653091   62277 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:32:13.655998   62277 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:32:13.658989   62277 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:32:13.662367   62277 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:32:13.663058   62277 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:32:13.690487   62277 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:32:13.690610   62277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:13.751339   62277 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.742321782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:13.751456   62277 docker.go:319] overlay module found
	I1211 00:32:13.754440   62277 out.go:179] * Using the docker driver based on existing profile
	I1211 00:32:13.757141   62277 start.go:309] selected driver: docker
	I1211 00:32:13.757162   62277 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:13.757271   62277 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:32:13.760914   62277 out.go:203] 
	W1211 00:32:13.763763   62277 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1211 00:32:13.766569   62277 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
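
--dry-run runs the full validation path without creating or mutating anything, which is why the 250MB request above fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY, against the 1800MB usable minimum reported in the log) while the second, flagless dry run passes. Reproduced by hand:

    minikube start -p functional-786978 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    echo $?    # 23 in this run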

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-786978 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (207.749106ms)

-- stdout --
	* [functional-786978] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1211 00:32:13.438512   62226 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:32:13.438724   62226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.438747   62226 out.go:374] Setting ErrFile to fd 2...
	I1211 00:32:13.438770   62226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:32:13.439185   62226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:32:13.439593   62226 out.go:368] Setting JSON to false
	I1211 00:32:13.440402   62226 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2620,"bootTime":1765410514,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1211 00:32:13.440486   62226 start.go:143] virtualization:  
	I1211 00:32:13.443990   62226 out.go:179] * [functional-786978] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1211 00:32:13.447125   62226 out.go:179]   - MINIKUBE_LOCATION=22061
	I1211 00:32:13.447188   62226 notify.go:221] Checking for updates...
	I1211 00:32:13.453116   62226 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 00:32:13.456044   62226 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	I1211 00:32:13.459081   62226 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	I1211 00:32:13.462282   62226 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1211 00:32:13.466250   62226 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 00:32:13.470290   62226 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1211 00:32:13.470844   62226 driver.go:422] Setting default libvirt URI to qemu:///system
	I1211 00:32:13.500014   62226 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1211 00:32:13.500195   62226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:32:13.567124   62226 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-11 00:32:13.557334331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:32:13.567235   62226 docker.go:319] overlay module found
	I1211 00:32:13.569975   62226 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1211 00:32:13.572574   62226 start.go:309] selected driver: docker
	I1211 00:32:13.572602   62226 start.go:927] validating driver "docker" against &{Name:functional-786978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HostOnlyCIDRv6:fd00::1/64 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-786978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ServiceCIDRv6:fd00::/108 PodCIDR:10.244.0.0/16 PodCIDRv6: IPFamily:ipv4 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 IPv6: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Subnetv6: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: StaticIPv6: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 00:32:13.572699   62226 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 00:32:13.575978   62226 out.go:203] 
	W1211 00:32:13.578798   62226 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1211 00:32:13.581612   62226 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)
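
The French output above is the object of this test: minikube renders its UI in the process locale, so the dry-run is expected to fail in French. The key stderr line, "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", is the localized form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal sketch for reproducing the localized failure by hand, assuming the test selects French via the standard locale environment variables that minikube's translation layer reads:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-786978 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # expected: exit status 23 with the French RSRC_INSUFFICIENT_REQ_MEMORY message, as in the log above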

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh -n functional-786978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cp functional-786978:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp829540402/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh -n functional-786978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh -n functional-786978 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.31s)
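
The three cp invocations above cover host-to-VM copy, VM-to-host copy, and host-to-VM copy into a directory that does not yet exist. Generalized, the two forms exercised here are:

    out/minikube-linux-arm64 -p functional-786978 cp <local-path> <path-in-vm>                      # host -> VM
    out/minikube-linux-arm64 -p functional-786978 cp functional-786978:<path-in-vm> <local-path>    # VM -> host, using the <profile>:<path> syntax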

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4875/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /etc/test/nested/copy/4875/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.27s)
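
The path /etc/test/nested/copy/4875/hosts is the in-VM copy of a file the suite stages on the host before start-up (4875 is evidently the test binary's PID, the same number that prefixes retry log lines elsewhere in this run). minikube syncs anything placed under $MINIKUBE_HOME/files into the machine, so the staged counterpart would sit at a host path mirroring the VM path, roughly:

    /home/jenkins/minikube-integration/22061-2739/.minikube/files/etc/test/nested/copy/4875/hosts    # assumed host-side location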

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /etc/ssl/certs/4875.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4875.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /usr/share/ca-certificates/4875.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/48752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /etc/ssl/certs/48752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/48752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /usr/share/ca-certificates/48752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.70s)
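
The .0 names checked last (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention used in /etc/ssl/certs, so the test verifies both the copied .pem files and their hash-named counterparts. One way to confirm the correspondence by hand, assuming the hashes were derived from these certificates:

    out/minikube-linux-arm64 -p functional-786978 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/4875.pem"    # should print 51391683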

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "sudo systemctl is-active docker": exit status 1 (266.662174ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "sudo systemctl is-active containerd": exit status 1 (276.923127ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.54s)
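
Both non-zero exits here are the expected result: this cluster runs crio, so docker and containerd must be inactive, and systemctl is-active exits with status 3 for an inactive unit (the exit status 1 is the minikube ssh wrapper surfacing that remote failure). The positive counterpart, not part of the test but a useful sanity check:

    out/minikube-linux-arm64 -p functional-786978 ssh "sudo systemctl is-active crio"    # should print "active" and exit 0 on this cluster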

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-786978 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
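
Note that teardown logs "failed to stop process: exit status 103" yet the test passes: the stop helper evidently treats a tunnel process that has already gone away as non-fatal during cleanup.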

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)
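
The misspelled subcommand "profile lis" is deliberate: the test asserts that an invalid profile command does not implicitly create a profile, then checks the profile list JSON to confirm nothing was added.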

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "346.867136ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.201566ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "338.401149ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "48.182462ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
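
In this test and profile_list above, the light variants (profile list -l, profile list -o json --light) return roughly an order of magnitude faster (~50ms versus ~340ms), presumably because light mode skips querying each profile's live cluster status.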

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.742181ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1211 00:32:07.055048    4875 retry.go:31] will retry after 263.968285ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "sudo umount -f /mount-9p": exit status 1 (273.489148ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-786978 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1860978438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)
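
Two failure-shaped steps in this log are expected. The first findmnt probe races the 9p mount becoming visible, so the helper retries after ~264ms and then succeeds. Later, the forced unmount runs after the mount daemon has already been stopped, so umount reports "not mounted" and exits with status 32 (umount's failure code), which the test tolerates as part of cleanup.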

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T" /mount1: exit status 1 (558.75665ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1211 00:32:08.925711    4875 retry.go:31] will retry after 456.83644ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-786978 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-786978 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1098791661/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.92s)
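
This test starts three concurrent mounts of the same host directory at /mount1, /mount2 and /mount3, verifies each with findmnt, then tears all of them down with a single "mount -p functional-786978 --kill=true". The per-mount stop helpers that follow find the parent processes already gone, hence the "unable to find parent, assuming dead" lines.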

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-786978 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-786978
localhost/kicbase/echo-server:functional-786978
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-786978 image ls --format short --alsologtostderr:
I1211 00:32:26.043822   64414 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:26.043931   64414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:26.043938   64414 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:26.043943   64414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:26.044337   64414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:26.045780   64414 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:26.045978   64414 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:26.046516   64414 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:26.066176   64414 ssh_runner.go:195] Run: systemctl --version
I1211 00:32:26.066235   64414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:26.083258   64414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:26.187780   64414 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-786978 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-786978  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-786978  │ 8c9ecd8b77aa0 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ localhost/my-image                      │ functional-786978  │ ae62ae5c9540d │ 1.64MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-786978 image ls --format table --alsologtostderr:
I1211 00:32:30.777155   64947 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:30.777308   64947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.777341   64947 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:30.777354   64947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.777613   64947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:30.778333   64947 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.778501   64947 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.779088   64947 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:30.796371   64947 ssh_runner.go:195] Run: systemctl --version
I1211 00:32:30.796426   64947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:30.814661   64947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:30.921594   64947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-786978 image ls --format json --alsologtostderr:
[{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8c9ecd8b77aa03110e45ef538aaae6acc6b3c9527b835d1084135db03c1ee7c9","repoDigests":["localhost/minikube-local-cache-test@sha256:10335fb1718de8db4c544806630b89ba3c448a5064d78a291c59ee7ed866ffac"],"repoTags":["localhost/minikube-local-cache-test:functional-786978"],"size":"3330"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-sched
uler:v1.35.0-beta.0"],"size":"49822549"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"dc601463d9ed6c0c7ca12c2a2e3767bb219fdf4a47cdcc4200366ff1a19aeac0","repoDigests":["docker.io/library/6c73722d38906ae89a7f9d0d6582d6d4edeaa33674af2be6c8e95be02213598b-tmp@sha256:b69b25a4cf7dc8bf7d763d51b657097d9cc584b90a3bba766c62c42282445fdd"],"repoTags":[],"size":"1638178"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"
},{"id":"ae62ae5c9540d32a7fb5b59ae64782fb2192bab2e6029b0615f255e33331d654","repoDigests":["localhost/my-image@sha256:40f676a38830d28e46a2f48b65c2b34518bff87f76d55ccdeef51c176f74935a"],"repoTags":["localhost/my-image:functional-786978"],"size":"1640790"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"68b5f775f
18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ce2d2cda2d858fdae
a84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-786978"],"size":"4788229"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
"docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-786978 image ls --format json --alsologtostderr:
I1211 00:32:30.538837   64905 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:30.539338   64905 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.539354   64905 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:30.539362   64905 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.540176   64905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:30.541264   64905 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.541459   64905 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.542193   64905 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:30.560630   64905 ssh_runner.go:195] Run: systemctl --version
I1211 00:32:30.560681   64905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:30.579214   64905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:30.682992   64905 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-786978 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-786978
size: "4788229"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 8c9ecd8b77aa03110e45ef538aaae6acc6b3c9527b835d1084135db03c1ee7c9
repoDigests:
- localhost/minikube-local-cache-test@sha256:10335fb1718de8db4c544806630b89ba3c448a5064d78a291c59ee7ed866ffac
repoTags:
- localhost/minikube-local-cache-test:functional-786978
size: "3330"
- id: ae62ae5c9540d32a7fb5b59ae64782fb2192bab2e6029b0615f255e33331d654
repoDigests:
- localhost/my-image@sha256:40f676a38830d28e46a2f48b65c2b34518bff87f76d55ccdeef51c176f74935a
repoTags:
- localhost/my-image:functional-786978
size: "1640790"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: dc601463d9ed6c0c7ca12c2a2e3767bb219fdf4a47cdcc4200366ff1a19aeac0
repoDigests:
- docker.io/library/6c73722d38906ae89a7f9d0d6582d6d4edeaa33674af2be6c8e95be02213598b-tmp@sha256:b69b25a4cf7dc8bf7d763d51b657097d9cc584b90a3bba766c62c42282445fdd
repoTags: []
size: "1638178"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-786978 image ls --format yaml --alsologtostderr:
I1211 00:32:30.312399   64869 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:30.312641   64869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.312669   64869 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:30.312690   64869 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:30.312985   64869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:30.313646   64869 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.313828   64869 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:30.314386   64869 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:30.331737   64869 ssh_runner.go:195] Run: systemctl --version
I1211 00:32:30.331795   64869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:30.348893   64869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:30.453739   64869 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
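
All four listing formats in this group (short, table, json, yaml) are rendered client-side from the same data: each run's stderr shows the identical "sudo crictl images --output json" call inside the node.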

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-786978 ssh pgrep buildkitd: exit status 1 (258.76548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image build -t localhost/my-image:functional-786978 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-786978 image build -t localhost/my-image:functional-786978 testdata/build --alsologtostderr: (3.387582919s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-786978 image build -t localhost/my-image:functional-786978 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dc601463d9e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-786978
--> ae62ae5c954
Successfully tagged localhost/my-image:functional-786978
ae62ae5c9540d32a7fb5b59ae64782fb2192bab2e6029b0615f255e33331d654
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-786978 image build -t localhost/my-image:functional-786978 testdata/build --alsologtostderr:
I1211 00:32:26.676775   64558 out.go:360] Setting OutFile to fd 1 ...
I1211 00:32:26.676896   64558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:26.676905   64558 out.go:374] Setting ErrFile to fd 2...
I1211 00:32:26.676911   64558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1211 00:32:26.677155   64558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
I1211 00:32:26.677763   64558 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:26.678505   64558 config.go:182] Loaded profile config "functional-786978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1211 00:32:26.679207   64558 cli_runner.go:164] Run: docker container inspect functional-786978 --format={{.State.Status}}
I1211 00:32:26.696575   64558 ssh_runner.go:195] Run: systemctl --version
I1211 00:32:26.696641   64558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-786978
I1211 00:32:26.713347   64558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/functional-786978/id_rsa Username:docker}
I1211 00:32:26.813013   64558 build_images.go:162] Building image from path: /tmp/build.65683187.tar
I1211 00:32:26.813079   64558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1211 00:32:26.820376   64558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.65683187.tar
I1211 00:32:26.823956   64558 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.65683187.tar: stat -c "%s %y" /var/lib/minikube/build/build.65683187.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.65683187.tar': No such file or directory
I1211 00:32:26.823988   64558 ssh_runner.go:362] scp /tmp/build.65683187.tar --> /var/lib/minikube/build/build.65683187.tar (3072 bytes)
I1211 00:32:26.841086   64558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.65683187
I1211 00:32:26.848557   64558 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.65683187 -xf /var/lib/minikube/build/build.65683187.tar
I1211 00:32:26.856260   64558 crio.go:315] Building image: /var/lib/minikube/build/build.65683187
I1211 00:32:26.856356   64558 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-786978 /var/lib/minikube/build/build.65683187 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1211 00:32:29.986715   64558 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-786978 /var/lib/minikube/build/build.65683187 --cgroup-manager=cgroupfs: (3.130329745s)
I1211 00:32:29.986780   64558 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.65683187
I1211 00:32:29.994465   64558 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.65683187.tar
I1211 00:32:30.002126   64558 build_images.go:218] Built localhost/my-image:functional-786978 from /tmp/build.65683187.tar
I1211 00:32:30.002156   64558 build_images.go:134] succeeded building to: functional-786978
I1211 00:32:30.002162   64558 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.89s)
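The build log above shows what `minikube image build` does under the crio runtime: the build context is tarred, copied to /var/lib/minikube/build over SSH, unpacked, and built on the node with `sudo podman build --cgroup-manager=cgroupfs`. Below is a minimal Go sketch of driving the same CLI flow and verifying the result; it assumes the binary path and profile name from this run and is illustrative only, not the functional_test.go helper itself.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-786978" // profile name taken from this run
	tag := "localhost/my-image:" + profile

	// Build the image on the node; with crio, minikube copies the build
	// context tarball to /var/lib/minikube/build and runs `sudo podman build`.
	build := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build", "--alsologtostderr")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}

	// Confirm the runtime now lists the tag, as the test does with `image ls`.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"image", "ls").Output()
	if err != nil || !strings.Contains(string(out), tag) {
		fmt.Println("image not visible after build")
		return
	}
	fmt.Println("built and listed:", tag)
}
```

The assertion mirrors the tests' own pattern in this group: each `image` subcommand is followed by an `image ls` to confirm the runtime actually sees the tag.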

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image load --daemon kicbase/echo-server:functional-786978 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image load --daemon kicbase/echo-server:functional-786978 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-786978
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image load --daemon kicbase/echo-server:functional-786978 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
E1211 00:32:21.217050    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image save kicbase/echo-server:functional-786978 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image rm kicbase/echo-server:functional-786978 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-786978
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 image save --daemon kicbase/echo-server:functional-786978 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
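ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a save/remove/load round-trip through the runtime. A compact sketch of the same sequence follows, assuming the binary and profile from this run; the /tmp tarball path and the `run` helper are hypothetical stand-ins for the workspace path and test helpers used above.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and surfaces its combined output on failure,
// mirroring the (dbg) Run steps in the log above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	mk := "out/minikube-linux-arm64"
	profile := "functional-786978"
	img := "kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar" // assumed path; the run above uses the workspace dir

	steps := [][]string{
		{mk, "-p", profile, "image", "save", img, tar}, // ImageSaveToFile
		{mk, "-p", profile, "image", "rm", img},        // ImageRemove
		{mk, "-p", profile, "image", "load", tar},      // ImageLoadFromFile
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("save/remove/load round-trip completed for", img)
}
```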

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-786978 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-786978
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (194.54s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1211 00:35:09.647952    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:15.963794    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:15.970136    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:15.981575    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:16.002865    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:16.044323    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:16.125813    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:16.287371    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:16.608713    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:17.250075    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:18.532162    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:21.094076    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:26.216305    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:36.458368    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:35:56.940178    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:36:37.903145    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:37:21.216466    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m13.648180088s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (194.54s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.46s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 kubectl -- rollout status deployment/busybox: (4.626747658s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-bpbjh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-mx77k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-n9zxf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-bpbjh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-mx77k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-n9zxf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-bpbjh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-mx77k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-n9zxf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.46s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-bpbjh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-bpbjh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-mx77k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-mx77k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-n9zxf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 kubectl -- exec busybox-7b57f96db7-n9zxf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
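Each PingHostFromPods step resolves host.minikube.internal inside a busybox pod (the `awk 'NR==5' | cut -d' ' -f3` pipeline pulls the resolved address out of busybox's nslookup output) and then pings the returned host IP. A minimal sketch of that check for a single pod, reusing the context and pod name from this run:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx, pod := "ha-295033", "busybox-7b57f96db7-bpbjh" // names from this run

	// Same pipeline as the test: line 5 of busybox's nslookup output
	// carries the resolved address.
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", lookup).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	ip := strings.TrimSpace(string(out))

	// One ICMP echo from the pod back to the host gateway.
	if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("pod can reach host at", ip)
}
```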

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.09s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node add --alsologtostderr -v 5
E1211 00:37:59.825206    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 node add --alsologtostderr -v 5: (59.038441642s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: (1.046726136s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.09s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-295033 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.140524932s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.25s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 status --output json --alsologtostderr -v 5: (1.158662326s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp testdata/cp-test.txt ha-295033:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983111103/001/cp-test_ha-295033.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033:/home/docker/cp-test.txt ha-295033-m02:/home/docker/cp-test_ha-295033_ha-295033-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test_ha-295033_ha-295033-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033:/home/docker/cp-test.txt ha-295033-m03:/home/docker/cp-test_ha-295033_ha-295033-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test_ha-295033_ha-295033-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033:/home/docker/cp-test.txt ha-295033-m04:/home/docker/cp-test_ha-295033_ha-295033-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test_ha-295033_ha-295033-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp testdata/cp-test.txt ha-295033-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983111103/001/cp-test_ha-295033-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m02:/home/docker/cp-test.txt ha-295033:/home/docker/cp-test_ha-295033-m02_ha-295033.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test_ha-295033-m02_ha-295033.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m02:/home/docker/cp-test.txt ha-295033-m03:/home/docker/cp-test_ha-295033-m02_ha-295033-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test_ha-295033-m02_ha-295033-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m02:/home/docker/cp-test.txt ha-295033-m04:/home/docker/cp-test_ha-295033-m02_ha-295033-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test_ha-295033-m02_ha-295033-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp testdata/cp-test.txt ha-295033-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983111103/001/cp-test_ha-295033-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m03:/home/docker/cp-test.txt ha-295033:/home/docker/cp-test_ha-295033-m03_ha-295033.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test_ha-295033-m03_ha-295033.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m03:/home/docker/cp-test.txt ha-295033-m02:/home/docker/cp-test_ha-295033-m03_ha-295033-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test_ha-295033-m03_ha-295033-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m03:/home/docker/cp-test.txt ha-295033-m04:/home/docker/cp-test_ha-295033-m03_ha-295033-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test_ha-295033-m03_ha-295033-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp testdata/cp-test.txt ha-295033-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1983111103/001/cp-test_ha-295033-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m04:/home/docker/cp-test.txt ha-295033:/home/docker/cp-test_ha-295033-m04_ha-295033.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033 "sudo cat /home/docker/cp-test_ha-295033-m04_ha-295033.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m04:/home/docker/cp-test.txt ha-295033-m02:/home/docker/cp-test_ha-295033-m04_ha-295033-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m02 "sudo cat /home/docker/cp-test_ha-295033-m04_ha-295033-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 cp ha-295033-m04:/home/docker/cp-test.txt ha-295033-m03:/home/docker/cp-test_ha-295033-m04_ha-295033-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 ssh -n ha-295033-m03 "sudo cat /home/docker/cp-test_ha-295033-m04_ha-295033-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.25s)
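The CopyFile matrix above covers the three shapes `minikube cp` accepts: a local path to a node path, a node path back to a local path, and node to node, with each copy verified by `ssh -n <node> sudo cat`. Here is a sketch of one copy in each direction, assuming the profile from this run; the `cp` helper and the /tmp destination are hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
)

// cp wraps `minikube cp`; either side may be a bare local path or <node>:<path>.
func cp(profile, src, dst string) error {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", src, dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("cp %s -> %s: %v\n%s", src, dst, err, out)
	}
	return nil
}

func main() {
	p := "ha-295033"
	pairs := [][2]string{
		{"testdata/cp-test.txt", p + ":/home/docker/cp-test.txt"},       // host -> node
		{p + ":/home/docker/cp-test.txt", "/tmp/cp-test_" + p + ".txt"}, // node -> host
		{p + ":/home/docker/cp-test.txt",
			p + "-m02:/home/docker/cp-test_" + p + "_" + p + "-m02.txt"}, // node -> node
	}
	for _, pr := range pairs {
		if err := cp(p, pr[0], pr[1]); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("all three cp directions succeeded")
}
```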

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 node stop m02 --alsologtostderr -v 5: (12.05163085s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: exit status 7 (825.051934ms)

-- stdout --
	ha-295033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-295033-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-295033-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-295033-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1211 00:39:17.551345   80697 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:39:17.551485   80697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:39:17.551495   80697 out.go:374] Setting ErrFile to fd 2...
	I1211 00:39:17.551501   80697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:39:17.551787   80697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:39:17.552006   80697 out.go:368] Setting JSON to false
	I1211 00:39:17.552046   80697 mustload.go:66] Loading cluster: ha-295033
	I1211 00:39:17.552142   80697 notify.go:221] Checking for updates...
	I1211 00:39:17.555306   80697 config.go:182] Loaded profile config "ha-295033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:39:17.555338   80697 status.go:174] checking status of ha-295033 ...
	I1211 00:39:17.555939   80697 cli_runner.go:164] Run: docker container inspect ha-295033 --format={{.State.Status}}
	I1211 00:39:17.577123   80697 status.go:371] ha-295033 host status = "Running" (err=<nil>)
	I1211 00:39:17.577148   80697 host.go:66] Checking if "ha-295033" exists ...
	I1211 00:39:17.577625   80697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-295033
	I1211 00:39:17.611205   80697 host.go:66] Checking if "ha-295033" exists ...
	I1211 00:39:17.611615   80697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:39:17.611679   80697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-295033
	I1211 00:39:17.636517   80697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/ha-295033/id_rsa Username:docker}
	I1211 00:39:17.740755   80697 ssh_runner.go:195] Run: systemctl --version
	I1211 00:39:17.747629   80697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:39:17.761604   80697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:39:17.835327   80697 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-11 00:39:17.826183019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:39:17.835854   80697 kubeconfig.go:125] found "ha-295033" server: "https://192.168.49.254:8443"
	I1211 00:39:17.835886   80697 api_server.go:166] Checking apiserver status ...
	I1211 00:39:17.835937   80697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:39:17.847870   80697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1211 00:39:17.856436   80697 api_server.go:182] apiserver freezer: "8:freezer:/docker/859878e16e8358b0f1b16ffb17ccf96f4bceddf02db98e1cf391cc3d0c7e45a5/crio/crio-718ea8b2becc8db82b716be79ccfefb170ba194015172614afc40feb2d3a565f"
	I1211 00:39:17.856509   80697 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/859878e16e8358b0f1b16ffb17ccf96f4bceddf02db98e1cf391cc3d0c7e45a5/crio/crio-718ea8b2becc8db82b716be79ccfefb170ba194015172614afc40feb2d3a565f/freezer.state
	I1211 00:39:17.866234   80697 api_server.go:204] freezer state: "THAWED"
	I1211 00:39:17.866261   80697 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1211 00:39:17.876481   80697 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1211 00:39:17.876510   80697 status.go:463] ha-295033 apiserver status = Running (err=<nil>)
	I1211 00:39:17.876520   80697 status.go:176] ha-295033 status: &{Name:ha-295033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:39:17.876536   80697 status.go:174] checking status of ha-295033-m02 ...
	I1211 00:39:17.876860   80697 cli_runner.go:164] Run: docker container inspect ha-295033-m02 --format={{.State.Status}}
	I1211 00:39:17.904804   80697 status.go:371] ha-295033-m02 host status = "Stopped" (err=<nil>)
	I1211 00:39:17.904833   80697 status.go:384] host is not running, skipping remaining checks
	I1211 00:39:17.904840   80697 status.go:176] ha-295033-m02 status: &{Name:ha-295033-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:39:17.904861   80697 status.go:174] checking status of ha-295033-m03 ...
	I1211 00:39:17.905175   80697 cli_runner.go:164] Run: docker container inspect ha-295033-m03 --format={{.State.Status}}
	I1211 00:39:17.928393   80697 status.go:371] ha-295033-m03 host status = "Running" (err=<nil>)
	I1211 00:39:17.928417   80697 host.go:66] Checking if "ha-295033-m03" exists ...
	I1211 00:39:17.928748   80697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-295033-m03
	I1211 00:39:17.947220   80697 host.go:66] Checking if "ha-295033-m03" exists ...
	I1211 00:39:17.947539   80697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:39:17.947588   80697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-295033-m03
	I1211 00:39:17.967163   80697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/ha-295033-m03/id_rsa Username:docker}
	I1211 00:39:18.072856   80697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:39:18.087384   80697 kubeconfig.go:125] found "ha-295033" server: "https://192.168.49.254:8443"
	I1211 00:39:18.087414   80697 api_server.go:166] Checking apiserver status ...
	I1211 00:39:18.087457   80697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:39:18.099790   80697 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	I1211 00:39:18.110006   80697 api_server.go:182] apiserver freezer: "8:freezer:/docker/a6a56bb2e72edb5dc2d3b23df2b603b16bf4067a1dfc464cdbd1b77e5836d51e/crio/crio-4db79f07eacc4b9129f9928d90e68363e8b42b2b37e6a2c9c91ad5f2db7a4741"
	I1211 00:39:18.110078   80697 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a6a56bb2e72edb5dc2d3b23df2b603b16bf4067a1dfc464cdbd1b77e5836d51e/crio/crio-4db79f07eacc4b9129f9928d90e68363e8b42b2b37e6a2c9c91ad5f2db7a4741/freezer.state
	I1211 00:39:18.118368   80697 api_server.go:204] freezer state: "THAWED"
	I1211 00:39:18.118394   80697 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1211 00:39:18.126839   80697 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1211 00:39:18.126915   80697 status.go:463] ha-295033-m03 apiserver status = Running (err=<nil>)
	I1211 00:39:18.126930   80697 status.go:176] ha-295033-m03 status: &{Name:ha-295033-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:39:18.126947   80697 status.go:174] checking status of ha-295033-m04 ...
	I1211 00:39:18.127357   80697 cli_runner.go:164] Run: docker container inspect ha-295033-m04 --format={{.State.Status}}
	I1211 00:39:18.146415   80697 status.go:371] ha-295033-m04 host status = "Running" (err=<nil>)
	I1211 00:39:18.146439   80697 host.go:66] Checking if "ha-295033-m04" exists ...
	I1211 00:39:18.146880   80697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-295033-m04
	I1211 00:39:18.170296   80697 host.go:66] Checking if "ha-295033-m04" exists ...
	I1211 00:39:18.170620   80697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:39:18.170667   80697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-295033-m04
	I1211 00:39:18.190229   80697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/ha-295033-m04/id_rsa Username:docker}
	I1211 00:39:18.296141   80697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:39:18.311221   80697 status.go:176] ha-295033-m04 status: &{Name:ha-295033-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
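Note how `minikube status` signals the degraded cluster through its exit code (7 here, with m02 stopped) while still printing per-node detail on stdout, so a caller has to inspect the code rather than stderr. A minimal sketch of that handling, assuming the same binary and profile:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-295033",
		"status", "--alsologtostderr", "-v", "5")
	out, err := cmd.Output() // per-node detail still arrives on stdout
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is what this run produced with one node stopped;
		// treat any non-zero code as "not fully running".
		fmt.Println("cluster not fully running, exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}
```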

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (27.31s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 node start m02 --alsologtostderr -v 5: (25.975268906s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: (1.242603403s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.040521269s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (129.02s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 stop --alsologtostderr -v 5
E1211 00:40:09.649326    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:40:15.961096    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:40:24.284521    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 stop --alsologtostderr -v 5: (37.256514872s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 start --wait true --alsologtostderr -v 5
E1211 00:40:43.666817    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 start --wait true --alsologtostderr -v 5: (1m31.592897146s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (129.02s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 node delete m03 --alsologtostderr -v 5: (11.176821568s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.14s)
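After the deletion, the test confirms readiness with a kubectl go-template that walks each node's .status.conditions and prints the status of its Ready condition. The same check, sketched in Go around the identical template:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test: print each node's Ready condition status.
	tmpl := `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, line := range strings.Fields(string(out)) {
		if line != "True" {
			fmt.Println("a node is not Ready:", line)
			return
		}
	}
	fmt.Println("all nodes Ready")
}
```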

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.19s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 stop --alsologtostderr -v 5
E1211 00:42:21.220137    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 stop --alsologtostderr -v 5: (36.06292959s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: exit status 7 (130.839933ms)

-- stdout --
	ha-295033
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-295033-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-295033-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1211 00:42:45.540811   92547 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:42:45.540940   92547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:42:45.540951   92547 out.go:374] Setting ErrFile to fd 2...
	I1211 00:42:45.540957   92547 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:42:45.541198   92547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:42:45.541379   92547 out.go:368] Setting JSON to false
	I1211 00:42:45.541405   92547 mustload.go:66] Loading cluster: ha-295033
	I1211 00:42:45.541817   92547 config.go:182] Loaded profile config "ha-295033": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:42:45.541843   92547 status.go:174] checking status of ha-295033 ...
	I1211 00:42:45.542322   92547 cli_runner.go:164] Run: docker container inspect ha-295033 --format={{.State.Status}}
	I1211 00:42:45.542553   92547 notify.go:221] Checking for updates...
	I1211 00:42:45.560274   92547 status.go:371] ha-295033 host status = "Stopped" (err=<nil>)
	I1211 00:42:45.560294   92547 status.go:384] host is not running, skipping remaining checks
	I1211 00:42:45.560300   92547 status.go:176] ha-295033 status: &{Name:ha-295033 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:42:45.560333   92547 status.go:174] checking status of ha-295033-m02 ...
	I1211 00:42:45.560643   92547 cli_runner.go:164] Run: docker container inspect ha-295033-m02 --format={{.State.Status}}
	I1211 00:42:45.592119   92547 status.go:371] ha-295033-m02 host status = "Stopped" (err=<nil>)
	I1211 00:42:45.592143   92547 status.go:384] host is not running, skipping remaining checks
	I1211 00:42:45.592159   92547 status.go:176] ha-295033-m02 status: &{Name:ha-295033-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:42:45.592176   92547 status.go:174] checking status of ha-295033-m04 ...
	I1211 00:42:45.592453   92547 cli_runner.go:164] Run: docker container inspect ha-295033-m04 --format={{.State.Status}}
	I1211 00:42:45.610254   92547 status.go:371] ha-295033-m04 host status = "Stopped" (err=<nil>)
	I1211 00:42:45.610275   92547 status.go:384] host is not running, skipping remaining checks
	I1211 00:42:45.610281   92547 status.go:176] ha-295033-m04 status: &{Name:ha-295033-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.19s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (75.4s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m13.898096435s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: (1.288413117s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (75.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.5s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 node add --control-plane --alsologtostderr -v 5
E1211 00:45:09.648243    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:45:15.961436    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 node add --control-plane --alsologtostderr -v 5: (1m21.416262319s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-295033 status --alsologtostderr -v 5: (1.080586475s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.50s)
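Across this group the node lifecycle is exercised with four subcommands: `node add` (with `--control-plane` here to grow the HA control plane), `node stop`, `node start`, and `node delete`. A small sketch chaining them against the same profile; this is a hypothetical smoke sequence, not the tests' ordering, and the node name m05 is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube subcommand against the ha-295033 profile.
func mk(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "ha-295033"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"node", "add", "--control-plane"}, // as in AddSecondaryNode above
		{"node", "stop", "m05"},            // node name m05 is an assumption
		{"node", "start", "m05"},
		{"node", "delete", "m05"},
	}
	for _, s := range steps {
		if err := mk(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("node lifecycle sequence completed")
}
```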

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.086074102s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (77.32s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-759096 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-759096 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.310507434s)
--- PASS: TestJSONOutput/start/Command (77.32s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-759096 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-759096 --output=json --user=testUser: (5.853690384s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-837805 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-837805 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.090447ms)
-- stdout --
	{"specversion":"1.0","id":"e8dfccba-c800-4f38-a471-88903570eca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-837805] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"292fe2e4-b3a0-4911-9a0e-692daac3faff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22061"}}
	{"specversion":"1.0","id":"57003e37-fc14-417c-a7d1-3e2a343bfa1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f1e3324-f80f-4991-84d3-472e2bc57989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig"}}
	{"specversion":"1.0","id":"734bce97-4698-4342-aa88-9dbdcd324e64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube"}}
	{"specversion":"1.0","id":"cd64c57f-2028-4d73-bea2-89c30c9b4d30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bb985d1e-75d9-40f1-898f-aa3d2394f4bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e4d77141-29c1-4b34-af8e-c1d981776be3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-837805" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-837805
--- PASS: TestErrorJSONOutput (0.24s)
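
Note: with --output=json, every line minikube prints is a CloudEvents-style JSON object (specversion, id, source, type, data), so events such as the DRV_UNSUPPORTED_OS error above can be filtered mechanically. A minimal sketch, assuming jq is available (the profile name and jq filter are illustrative, not part of the test):

  out/minikube-linux-arm64 start -p demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  # prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64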

TestKicCustomNetwork/create_custom_network (61.65s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-293569 --network=
E1211 00:47:21.217076    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-293569 --network=: (59.427019399s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-293569" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-293569
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-293569: (2.204701526s)
--- PASS: TestKicCustomNetwork/create_custom_network (61.65s)

TestKicCustomNetwork/use_default_bridge_network (35.57s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-447218 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-447218 --network=bridge: (33.392553214s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-447218" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-447218
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-447218: (2.155832067s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.57s)

TestKicExistingNetwork (36.15s)

=== RUN   TestKicExistingNetwork
I1211 00:48:43.581300    4875 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1211 00:48:43.597189    4875 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1211 00:48:43.597270    4875 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1211 00:48:43.597287    4875 cli_runner.go:164] Run: docker network inspect existing-network
W1211 00:48:43.611417    4875 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1211 00:48:43.611446    4875 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1211 00:48:43.611462    4875 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1211 00:48:43.611570    4875 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1211 00:48:43.628778    4875 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7dc124717d46 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:5d:ee:ae:c9:fd} reservation:<nil>}
I1211 00:48:43.629155    4875 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001de7030}
I1211 00:48:43.629182    4875 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1211 00:48:43.629254    4875 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1211 00:48:43.689993    4875 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-015026 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-015026 --network=existing-network: (34.001221489s)
helpers_test.go:176: Cleaning up "existing-network-015026" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-015026
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-015026: (2.00750842s)
I1211 00:49:19.715179    4875 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.15s)
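
Note: the setup above shows how minikube picks a subnet: it inspects existing networks, skips 192.168.49.0/24 because the bridge br-7dc124717d46 already holds it, and takes the next free private /24 (192.168.58.0/24). The network the test pre-creates is plain Docker, reproducible verbatim from the log:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  out/minikube-linux-arm64 start -p existing-network-015026 --network=existing-network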

TestKicCustomSubnet (34.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-781278 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-781278 --subnet=192.168.60.0/24: (32.095675369s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-781278 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-781278" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-781278
E1211 00:49:52.726891    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-781278: (2.122117541s)
--- PASS: TestKicCustomSubnet (34.24s)
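
Note: the subnet assertion reads Docker's IPAM config directly; the same Go template works for inspecting any network minikube created:

  docker network inspect custom-subnet-781278 --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected: 192.168.60.0/24, matching the --subnet flag passed at start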

TestKicStaticIP (35.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-159449 --static-ip=192.168.200.200
E1211 00:50:09.648476    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:50:15.963107    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-159449 --static-ip=192.168.200.200: (32.708806143s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-159449 ip
helpers_test.go:176: Cleaning up "static-ip-159449" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-159449
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-159449: (2.275651777s)
--- PASS: TestKicStaticIP (35.15s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-055505 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-055505 --driver=docker  --container-runtime=crio: (28.301410786s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-058664 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-058664 --driver=docker  --container-runtime=crio: (35.122598863s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-055505
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-058664
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-058664" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-058664
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-058664: (2.056248402s)
helpers_test.go:176: Cleaning up "first-055505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-055505
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-055505: (2.070190333s)
--- PASS: TestMinikubeProfile (69.07s)
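
Note: `minikube profile <name>` switches the active profile, and `profile list -ojson` is what the test parses to confirm the switch took effect. A hedged sketch of the same inspection, assuming jq and assuming the JSON shape with valid profiles under a "valid" array:

  out/minikube-linux-arm64 profile first-055505
  out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'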

TestMountStart/serial/StartWithMountFirst (9.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-115240 --memory=3072 --mount-string /tmp/TestMountStartserial435612281/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1211 00:51:39.030400    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-115240 --memory=3072 --mount-string /tmp/TestMountStartserial435612281/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.172485903s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.17s)
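
Note: --mount-string maps a host path to a guest path (here /tmp/TestMountStartserial435612281/001 onto /minikube-host), while --mount-uid/--mount-gid set ownership, --mount-msize the 9p message size, and --mount-port the server port; --no-kubernetes skips cluster bringup, which is why the start completes in about 8 seconds. The later Verify* steps simply list the guest path over ssh:

  out/minikube-linux-arm64 -p mount-start-1-115240 ssh -- ls /minikube-host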

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-115240 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-117094 --memory=3072 --mount-string /tmp/TestMountStartserial435612281/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-117094 --memory=3072 --mount-string /tmp/TestMountStartserial435612281/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.711604893s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.71s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-117094 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-115240 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-115240 --alsologtostderr -v=5: (1.71242345s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-117094 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-117094
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-117094: (1.285194895s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-117094
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-117094: (7.414432074s)
--- PASS: TestMountStart/serial/RestartStopped (8.41s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-117094 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (138.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-872911 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1211 00:52:21.216861    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-872911 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.354182048s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.90s)

TestMultiNode/serial/DeployApp2Nodes (4.7s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-872911 -- rollout status deployment/busybox: (2.925232451s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-b4tqd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-g86c6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-b4tqd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-g86c6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-b4tqd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-g86c6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.70s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-b4tqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-b4tqd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-g86c6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-872911 -- exec busybox-7b57f96db7-g86c6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
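
Note: each pod resolves host.minikube.internal and then pings the address it got back. In busybox's nslookup output the answer lands on line 5, so `awk 'NR==5' | cut -d' ' -f3` slices out just the IP, here 192.168.67.1, the gateway of the cluster's Docker network. The direct-kubectl equivalent of the wrapped commands above:

  kubectl exec busybox-7b57f96db7-b4tqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl exec busybox-7b57f96db7-b4tqd -- sh -c "ping -c 1 192.168.67.1"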

TestMultiNode/serial/AddNode (57.87s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-872911 -v=5 --alsologtostderr
E1211 00:55:09.648003    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 00:55:15.961020    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-872911 -v=5 --alsologtostderr: (57.169842178s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.87s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-872911 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.58s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp testdata/cp-test.txt multinode-872911:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984241520/001/cp-test_multinode-872911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911:/home/docker/cp-test.txt multinode-872911-m02:/home/docker/cp-test_multinode-872911_multinode-872911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test_multinode-872911_multinode-872911-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911:/home/docker/cp-test.txt multinode-872911-m03:/home/docker/cp-test_multinode-872911_multinode-872911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test_multinode-872911_multinode-872911-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp testdata/cp-test.txt multinode-872911-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984241520/001/cp-test_multinode-872911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m02:/home/docker/cp-test.txt multinode-872911:/home/docker/cp-test_multinode-872911-m02_multinode-872911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test_multinode-872911-m02_multinode-872911.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m02:/home/docker/cp-test.txt multinode-872911-m03:/home/docker/cp-test_multinode-872911-m02_multinode-872911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test_multinode-872911-m02_multinode-872911-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp testdata/cp-test.txt multinode-872911-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1984241520/001/cp-test_multinode-872911-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m03:/home/docker/cp-test.txt multinode-872911:/home/docker/cp-test_multinode-872911-m03_multinode-872911.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911 "sudo cat /home/docker/cp-test_multinode-872911-m03_multinode-872911.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911-m03:/home/docker/cp-test.txt multinode-872911-m02:/home/docker/cp-test_multinode-872911-m03_multinode-872911-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 ssh -n multinode-872911-m02 "sudo cat /home/docker/cp-test_multinode-872911-m03_multinode-872911-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.58s)
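
Note: `minikube cp` treats an argument of the form <node>:<path> as remote, so the block above exercises all three directions: host to node, node back to the host, and node to node, each verified with `ssh -n <node> sudo cat`. The three shapes, with names from this run (the local destination path is shortened for illustration):

  out/minikube-linux-arm64 -p multinode-872911 cp testdata/cp-test.txt multinode-872911:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911:/home/docker/cp-test.txt /tmp/cp-test.txt
  out/minikube-linux-arm64 -p multinode-872911 cp multinode-872911:/home/docker/cp-test.txt multinode-872911-m02:/home/docker/cp-test.txt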

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-872911 node stop m03: (1.321034826s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-872911 status: exit status 7 (555.663188ms)
-- stdout --
	multinode-872911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-872911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-872911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr: exit status 7 (577.145435ms)
-- stdout --
	multinode-872911
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-872911-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-872911-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1211 00:55:46.214249  142999 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:55:46.214430  142999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:55:46.214461  142999 out.go:374] Setting ErrFile to fd 2...
	I1211 00:55:46.214482  142999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:55:46.214804  142999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:55:46.215064  142999 out.go:368] Setting JSON to false
	I1211 00:55:46.215122  142999 mustload.go:66] Loading cluster: multinode-872911
	I1211 00:55:46.215216  142999 notify.go:221] Checking for updates...
	I1211 00:55:46.215660  142999 config.go:182] Loaded profile config "multinode-872911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:55:46.215944  142999 status.go:174] checking status of multinode-872911 ...
	I1211 00:55:46.216667  142999 cli_runner.go:164] Run: docker container inspect multinode-872911 --format={{.State.Status}}
	I1211 00:55:46.237213  142999 status.go:371] multinode-872911 host status = "Running" (err=<nil>)
	I1211 00:55:46.237236  142999 host.go:66] Checking if "multinode-872911" exists ...
	I1211 00:55:46.237559  142999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-872911
	I1211 00:55:46.271480  142999 host.go:66] Checking if "multinode-872911" exists ...
	I1211 00:55:46.271806  142999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:55:46.271860  142999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-872911
	I1211 00:55:46.289543  142999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/multinode-872911/id_rsa Username:docker}
	I1211 00:55:46.396489  142999 ssh_runner.go:195] Run: systemctl --version
	I1211 00:55:46.402877  142999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:55:46.416396  142999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1211 00:55:46.484897  142999 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-11 00:55:46.469032649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1211 00:55:46.485703  142999 kubeconfig.go:125] found "multinode-872911" server: "https://192.168.67.2:8443"
	I1211 00:55:46.485742  142999 api_server.go:166] Checking apiserver status ...
	I1211 00:55:46.485799  142999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 00:55:46.500478  142999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1280/cgroup
	I1211 00:55:46.510169  142999 api_server.go:182] apiserver freezer: "8:freezer:/docker/f6e47ee918bc99633aee80c5f45324984739f02bc19c86f77b6ba6584f2483a1/crio/crio-4afba5993dd6a4e19f31a689eccfe11d37074fbbd99880ee180efcf9b5c96da4"
	I1211 00:55:46.510258  142999 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f6e47ee918bc99633aee80c5f45324984739f02bc19c86f77b6ba6584f2483a1/crio/crio-4afba5993dd6a4e19f31a689eccfe11d37074fbbd99880ee180efcf9b5c96da4/freezer.state
	I1211 00:55:46.518208  142999 api_server.go:204] freezer state: "THAWED"
	I1211 00:55:46.518237  142999 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1211 00:55:46.526323  142999 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1211 00:55:46.526359  142999 status.go:463] multinode-872911 apiserver status = Running (err=<nil>)
	I1211 00:55:46.526370  142999 status.go:176] multinode-872911 status: &{Name:multinode-872911 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:55:46.526385  142999 status.go:174] checking status of multinode-872911-m02 ...
	I1211 00:55:46.526694  142999 cli_runner.go:164] Run: docker container inspect multinode-872911-m02 --format={{.State.Status}}
	I1211 00:55:46.546203  142999 status.go:371] multinode-872911-m02 host status = "Running" (err=<nil>)
	I1211 00:55:46.546227  142999 host.go:66] Checking if "multinode-872911-m02" exists ...
	I1211 00:55:46.546535  142999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-872911-m02
	I1211 00:55:46.564491  142999 host.go:66] Checking if "multinode-872911-m02" exists ...
	I1211 00:55:46.564840  142999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 00:55:46.564899  142999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-872911-m02
	I1211 00:55:46.588664  142999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22061-2739/.minikube/machines/multinode-872911-m02/id_rsa Username:docker}
	I1211 00:55:46.696400  142999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 00:55:46.709203  142999 status.go:176] multinode-872911-m02 status: &{Name:multinode-872911-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:55:46.709239  142999 status.go:174] checking status of multinode-872911-m03 ...
	I1211 00:55:46.709546  142999 cli_runner.go:164] Run: docker container inspect multinode-872911-m03 --format={{.State.Status}}
	I1211 00:55:46.729659  142999 status.go:371] multinode-872911-m03 host status = "Stopped" (err=<nil>)
	I1211 00:55:46.729685  142999 status.go:384] host is not running, skipping remaining checks
	I1211 00:55:46.729706  142999 status.go:176] multinode-872911-m03 status: &{Name:multinode-872911-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
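
Note: `minikube status` exits non-zero (7 in this run) as soon as any node's host is stopped, even though the table itself prints fine, so scripts should branch on the exit code rather than the text. The stderr trace also shows how the control-plane check works: locate the kube-apiserver process, read its freezer cgroup to confirm it is THAWED, then query /healthz on the apiserver. A minimal scripted check (illustrative, not from the test):

  out/minikube-linux-arm64 -p multinode-872911 status
  [ $? -eq 7 ] && echo "at least one node is stopped"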

TestMultiNode/serial/StartAfterStop (8.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-872911 node start m03 -v=5 --alsologtostderr: (7.392188224s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.18s)

TestMultiNode/serial/RestartKeepsNodes (81.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-872911
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-872911
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-872911: (25.035514156s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-872911 --wait=true -v=5 --alsologtostderr
E1211 00:57:04.285918    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-872911 --wait=true -v=5 --alsologtostderr: (56.696434838s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-872911
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.86s)

TestMultiNode/serial/DeleteNode (5.63s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 node delete m03
E1211 00:57:21.216452    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-872911 node delete m03: (4.914987862s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-872911 stop: (23.785626611s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-872911 status: exit status 7 (94.466184ms)
-- stdout --
	multinode-872911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-872911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr: exit status 7 (101.545185ms)
-- stdout --
	multinode-872911
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-872911-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1211 00:57:46.334368  150850 out.go:360] Setting OutFile to fd 1 ...
	I1211 00:57:46.334554  150850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:57:46.334580  150850 out.go:374] Setting ErrFile to fd 2...
	I1211 00:57:46.334600  150850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 00:57:46.335033  150850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 00:57:46.335308  150850 out.go:368] Setting JSON to false
	I1211 00:57:46.335355  150850 mustload.go:66] Loading cluster: multinode-872911
	I1211 00:57:46.336068  150850 config.go:182] Loaded profile config "multinode-872911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 00:57:46.336106  150850 status.go:174] checking status of multinode-872911 ...
	I1211 00:57:46.336836  150850 notify.go:221] Checking for updates...
	I1211 00:57:46.337188  150850 cli_runner.go:164] Run: docker container inspect multinode-872911 --format={{.State.Status}}
	I1211 00:57:46.357217  150850 status.go:371] multinode-872911 host status = "Stopped" (err=<nil>)
	I1211 00:57:46.357240  150850 status.go:384] host is not running, skipping remaining checks
	I1211 00:57:46.357251  150850 status.go:176] multinode-872911 status: &{Name:multinode-872911 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1211 00:57:46.357279  150850 status.go:174] checking status of multinode-872911-m02 ...
	I1211 00:57:46.357588  150850 cli_runner.go:164] Run: docker container inspect multinode-872911-m02 --format={{.State.Status}}
	I1211 00:57:46.389667  150850 status.go:371] multinode-872911-m02 host status = "Stopped" (err=<nil>)
	I1211 00:57:46.389692  150850 status.go:384] host is not running, skipping remaining checks
	I1211 00:57:46.389709  150850 status.go:176] multinode-872911-m02 status: &{Name:multinode-872911-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)
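
Note the exit codes here: minikube status exits 0 only when everything is running, and exit status 7 indicates a stopped host, which is why the test accepts the non-zero exit after minikube stop. A minimal sketch of scripting against that convention, using the same profile as above:

    out/minikube-linux-arm64 -p multinode-872911 status
    if [ $? -eq 7 ]; then
        echo "host stopped, as expected after 'minikube stop'"
    fi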

TestMultiNode/serial/RestartMultiNode (51.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-872911 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-872911 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.04149063s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-872911 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.73s)

TestMultiNode/serial/ValidateNameConflict (37.38s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-872911
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-872911-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-872911-m02 --driver=docker  --container-runtime=crio: exit status 14 (89.51642ms)

-- stdout --
	* [multinode-872911-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-872911-m02' is duplicated with machine name 'multinode-872911-m02' in profile 'multinode-872911'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-872911-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-872911-m03 --driver=docker  --container-runtime=crio: (34.800565545s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-872911
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-872911: exit status 80 (358.939533ms)

-- stdout --
	* Adding node m03 to cluster multinode-872911 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-872911-m03 already exists in multinode-872911-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-872911-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-872911-m03: (2.069828157s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.38s)
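
Both failures above are deliberate: minikube rejects a new profile whose name collides with a machine of an existing profile (multinode-872911-m02 is already the second node of multinode-872911), and node add refuses a node name that already exists as a standalone profile. A quick pre-flight check for such collisions, sketched with jq (the .valid[].Name path is an assumption about the JSON shape of profile list output):

    out/minikube-linux-arm64 profile list --output=json | jq -r '.valid[].Name'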

TestPreload (124.28s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-805156 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1211 01:00:09.647833    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:00:15.960564    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-805156 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m2.5789284s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-805156 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-805156 image pull gcr.io/k8s-minikube/busybox: (2.091880226s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-805156
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-805156: (5.937502301s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-805156 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-805156 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.980425667s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-805156 image list
helpers_test.go:176: Cleaning up "test-preload-805156" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-805156
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-805156: (2.42880241s)
--- PASS: TestPreload (124.28s)
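
The sequence above is the preload round-trip: start with --preload=false so images are pulled individually, add an image by hand, stop, restart with --preload=true, and confirm with image list that the hand-pulled image survived the restart. Condensed to commands (profile name hypothetical):

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --driver=docker --container-runtime=crio
    minikube -p preload-demo image list | grep busybox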

TestScheduledStopUnix (104.62s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-500949 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-500949 --memory=3072 --driver=docker  --container-runtime=crio: (28.019744065s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500949 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1211 01:01:52.111549  164907 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:01:52.111771  164907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:01:52.111834  164907 out.go:374] Setting ErrFile to fd 2...
	I1211 01:01:52.111856  164907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:01:52.112445  164907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:01:52.113248  164907 out.go:368] Setting JSON to false
	I1211 01:01:52.113436  164907 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:01:52.113869  164907 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:01:52.113992  164907 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/config.json ...
	I1211 01:01:52.114230  164907 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:01:52.114402  164907 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-500949 -n scheduled-stop-500949
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500949 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1211 01:01:52.579249  164993 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:01:52.579345  164993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:01:52.579355  164993 out.go:374] Setting ErrFile to fd 2...
	I1211 01:01:52.579361  164993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:01:52.579706  164993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:01:52.579982  164993 out.go:368] Setting JSON to false
	I1211 01:01:52.582048  164993 daemonize_unix.go:73] killing process 164925 as it is an old scheduled stop
	I1211 01:01:52.583107  164993 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:01:52.583639  164993 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:01:52.583772  164993 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/config.json ...
	I1211 01:01:52.584000  164993 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:01:52.584169  164993 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1211 01:01:52.588559    4875 retry.go:31] will retry after 70.285µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.588765    4875 retry.go:31] will retry after 159.849µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.589324    4875 retry.go:31] will retry after 222.602µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.589839    4875 retry.go:31] will retry after 258.191µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.590953    4875 retry.go:31] will retry after 316.601µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.592124    4875 retry.go:31] will retry after 637.929µs: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.593229    4875 retry.go:31] will retry after 1.437927ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.595450    4875 retry.go:31] will retry after 1.18162ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.597602    4875 retry.go:31] will retry after 2.4901ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.600828    4875 retry.go:31] will retry after 4.748708ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.606069    4875 retry.go:31] will retry after 5.471587ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.612702    4875 retry.go:31] will retry after 7.915473ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.620932    4875 retry.go:31] will retry after 18.221526ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.640135    4875 retry.go:31] will retry after 13.449562ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.654327    4875 retry.go:31] will retry after 31.137104ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
I1211 01:01:52.686573    4875 retry.go:31] will retry after 46.955832ms: open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500949 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500949 -n scheduled-stop-500949
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500949
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500949 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1211 01:02:18.539277  165357 out.go:360] Setting OutFile to fd 1 ...
	I1211 01:02:18.539429  165357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:02:18.539441  165357 out.go:374] Setting ErrFile to fd 2...
	I1211 01:02:18.539446  165357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1211 01:02:18.539694  165357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22061-2739/.minikube/bin
	I1211 01:02:18.539940  165357 out.go:368] Setting JSON to false
	I1211 01:02:18.540029  165357 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:02:18.540413  165357 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1211 01:02:18.540491  165357 profile.go:143] Saving config to /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/scheduled-stop-500949/config.json ...
	I1211 01:02:18.540679  165357 mustload.go:66] Loading cluster: scheduled-stop-500949
	I1211 01:02:18.540805  165357 config.go:182] Loaded profile config "scheduled-stop-500949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
E1211 01:02:21.216082    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500949
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-500949: exit status 7 (69.679168ms)

-- stdout --
	scheduled-stop-500949
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500949 -n scheduled-stop-500949
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500949 -n scheduled-stop-500949: exit status 7 (66.339675ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-500949" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-500949
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-500949: (4.961885029s)
--- PASS: TestScheduledStopUnix (104.62s)
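
The scheduled-stop flow exercised above: a later --schedule replaces any pending one (note the "killing process ... old scheduled stop" line), --cancel-scheduled clears it, and once a schedule fires the profile reports Stopped and status exits with code 7. The same flow by hand (profile name hypothetical):

    minikube stop -p sched-demo --schedule 5m
    minikube stop -p sched-demo --schedule 15s      # replaces the pending 5m schedule
    minikube stop -p sched-demo --cancel-scheduled
    minikube stop -p sched-demo --schedule 15s
    sleep 30
    minikube status -p sched-demo --format='{{.Host}}'   # prints Stopped; exit code 7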

TestInsufficientStorage (12.68s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-930985 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-930985 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.104190232s)

-- stdout --
	{"specversion":"1.0","id":"08927499-6fcf-4a60-b08e-13d18089d550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-930985] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8822a316-015e-4a01-9a3a-cf2b9d786bf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22061"}}
	{"specversion":"1.0","id":"a426bf8b-effc-42cd-9894-18e54b6b369c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3fe54c9c-9263-4fc9-a705-4703d35f1a55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig"}}
	{"specversion":"1.0","id":"52c87376-9cf0-45b1-9b43-deac4f860d96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube"}}
	{"specversion":"1.0","id":"355a1468-11ea-4a88-ae3b-d3a2c94fa2ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7e7a7d46-b24e-4a60-86f9-7c3a649907db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"95742319-59e0-42d8-bdbb-6b956a085ec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b9e3bef9-5328-4d1e-ae9d-ef29416c060e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0fbfc630-e4b5-44c4-9409-5000421db97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fef979f9-74c7-4087-b114-0ee9351c06a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8405c5c2-5f70-4be3-b515-a20ac1803435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-930985\" primary control-plane node in \"insufficient-storage-930985\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e73d3c3e-3f0c-420c-a258-38c480a65440","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ec70e9e-da3c-4863-bb0a-95cca0fc1f49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7152122c-2aba-402b-b14c-b0929c892ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-930985 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-930985 --output=json --layout=cluster: exit status 7 (307.189067ms)

-- stdout --
	{"Name":"insufficient-storage-930985","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-930985","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1211 01:03:19.043855  167076 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-930985" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-930985 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-930985 --output=json --layout=cluster: exit status 7 (307.740088ms)

-- stdout --
	{"Name":"insufficient-storage-930985","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-930985","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1211 01:03:19.351823  167144 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-930985" does not appear in /home/jenkins/minikube-integration/22061-2739/kubeconfig
	E1211 01:03:19.361742  167144 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/insufficient-storage-930985/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-930985" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-930985
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-930985: (1.961677984s)
--- PASS: TestInsufficientStorage (12.68s)
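
With --output=json, minikube start emits one CloudEvents-style JSON object per line, so the storage failure above is machine-readable; the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values in the output are test-only knobs this suite sets to simulate a full disk. A sketch that pulls out the error message with jq (profile name hypothetical):

    minikube start -p disk-demo --output=json --driver=docker \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'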

TestRunningBinaryUpgrade (299.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3302985145 start -p running-upgrade-335241 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3302985145 start -p running-upgrade-335241 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.522719676s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1211 01:12:21.216856    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:13:44.288009    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:15:09.648257    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:15:15.961150    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.530596152s)
helpers_test.go:176: Cleaning up "running-upgrade-335241" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-335241
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-335241: (2.001332014s)
--- PASS: TestRunningBinaryUpgrade (299.46s)
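
The upgrade test drives one profile with two binaries: a released minikube (downloaded to a temp path) creates the cluster and leaves it running, then the binary under test restarts the same profile in place. The pattern, with the paths as they appear in this run:

    /tmp/minikube-v1.35.0.3302985145 start -p running-upgrade-335241 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-335241 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio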

TestMissingContainerUpgrade (123.63s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3646286219 start -p missing-upgrade-724666 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3646286219 start -p missing-upgrade-724666 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.414743547s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-724666
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-724666
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-724666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-724666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.084138304s)
helpers_test.go:176: Cleaning up "missing-upgrade-724666" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-724666
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-724666: (2.611993882s)
--- PASS: TestMissingContainerUpgrade (123.63s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.227704ms)

-- stdout --
	* [NoKubernetes-899269] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22061
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22061-2739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22061-2739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
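
The exit-14 usage error is the point of this test: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config must be unset first, exactly as the error text advises. In commands (profile name hypothetical):

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0   # rejected: MK_USAGE, exit 14
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio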

TestNoKubernetes/serial/StartWithK8s (45.21s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-899269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-899269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.726520726s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-899269 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.21s)

TestNoKubernetes/serial/StartWithStopK8s (7.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.926430606s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-899269 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-899269 status -o json: exit status 2 (396.329636ms)

-- stdout --
	{"Name":"NoKubernetes-899269","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-899269
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-899269: (2.148481684s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.47s)

TestNoKubernetes/serial/Start (9.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-899269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.589420652s)
--- PASS: TestNoKubernetes/serial/Start (9.59s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22061-2739/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-899269 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-899269 "sudo systemctl is-active --quiet service kubelet": exit status 1 (453.08286ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)
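
The "ssh: Process exited with status 3" in stderr is systemd's convention rather than a test failure: systemctl is-active exits 0 for an active unit and 3 for an inactive one, so the non-zero exit proves the kubelet is not running. Checking by hand (a sketch; without --quiet the state name is also printed):

    out/minikube-linux-arm64 ssh -p NoKubernetes-899269 "sudo systemctl is-active kubelet"
    echo $?    # 3 expected with --no-kubernetes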

TestNoKubernetes/serial/ProfileList (3.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (3.268066731s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.92s)

TestNoKubernetes/serial/Stop (1.42s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-899269
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-899269: (1.424669902s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

TestNoKubernetes/serial/StartNoArgs (7.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-899269 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-899269 --driver=docker  --container-runtime=crio: (7.370644335s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-899269 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-899269 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.863345ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (2.08s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.08s)

TestStoppedBinaryUpgrade/Upgrade (303.29s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.183401702 start -p stopped-upgrade-421398 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.183401702 start -p stopped-upgrade-421398 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.343954083s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.183401702 -p stopped-upgrade-421398 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.183401702 -p stopped-upgrade-421398 stop: (1.275702033s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-421398 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1211 01:06:32.728606    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:07:21.216411    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:08:19.032673    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:10:09.648560    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/addons-903947/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1211 01:10:15.961102    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-786978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-421398 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.674605579s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (303.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-421398
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-421398: (1.737241152s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

TestPause/serial/Start (81.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-906108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-906108 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.647147787s)
--- PASS: TestPause/serial/Start (81.65s)

TestPause/serial/SecondStartNoReconfiguration (28.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-906108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1211 01:17:21.216937    4875 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22061-2739/.minikube/profiles/functional-976823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-906108 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.006667898s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.03s)
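
SecondStartNoReconfiguration asserts that rerunning minikube start against an already-running cluster is close to a no-op: the second start reuses the existing configuration rather than rewriting it, which is why it completes in a fraction of the first start's time. The basic cycle, including the pause step this group builds toward (profile name hypothetical):

    minikube start -p pause-demo --memory=3072 --driver=docker --container-runtime=crio
    minikube start -p pause-demo --driver=docker --container-runtime=crio   # reuses the running cluster
    minikube pause -p pause-demo
    minikube unpause -p pause-demo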

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-685635 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-685635" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-685635
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)